Test Report: Docker_Linux_crio_arm64 21681

595bbf5b740d7896a57580209f3c1775d52404c7:2025-10-08:41822

Test failures (38/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.58
35 TestAddons/parallel/Registry 15.61
36 TestAddons/parallel/RegistryCreds 0.54
37 TestAddons/parallel/Ingress 144.34
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 6.44
41 TestAddons/parallel/CSI 47.88
42 TestAddons/parallel/Headlamp 3.73
43 TestAddons/parallel/CloudSpanner 5.35
44 TestAddons/parallel/LocalPath 9.92
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 6.25
52 TestForceSystemdFlag 513.24
53 TestForceSystemdEnv 522
98 TestFunctional/parallel/ServiceCmdConnect 603.57
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.87
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
136 TestFunctional/parallel/ServiceCmd/Format 0.54
137 TestFunctional/parallel/ServiceCmd/URL 0.51
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.53
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.42
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.28
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
191 TestJSONOutput/pause/Command 2.26
197 TestJSONOutput/unpause/Command 1.82
261 TestPause/serial/Pause 7.84
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.49
303 TestStartStop/group/old-k8s-version/serial/Pause 6.16
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.57
314 TestStartStop/group/no-preload/serial/Pause 7.54
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 4.51
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.65
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.38
336 TestStartStop/group/embed-certs/serial/Pause 8.12
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.53
348 TestStartStop/group/newest-cni/serial/Pause 6.3
TestAddons/serial/Volcano (0.58s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable volcano --alsologtostderr -v=1: exit status 11 (574.765262ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 21:54:28.929848   11011 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:54:28.930560   11011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:54:28.930575   11011 out.go:374] Setting ErrFile to fd 2...
	I1008 21:54:28.930582   11011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:54:28.930902   11011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:54:28.931225   11011 mustload.go:65] Loading cluster: addons-961288
	I1008 21:54:28.931728   11011 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:54:28.931774   11011 addons.go:606] checking whether the cluster is paused
	I1008 21:54:28.931929   11011 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:54:28.931953   11011 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:54:28.932464   11011 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:54:28.969937   11011 ssh_runner.go:195] Run: systemctl --version
	I1008 21:54:28.970055   11011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:54:28.992317   11011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:54:29.100269   11011 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:54:29.100355   11011 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:54:29.128923   11011 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:54:29.128981   11011 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:54:29.129001   11011 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:54:29.129024   11011 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:54:29.129044   11011 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:54:29.129067   11011 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:54:29.129088   11011 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:54:29.129111   11011 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:54:29.129136   11011 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:54:29.129162   11011 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:54:29.129185   11011 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:54:29.129208   11011 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:54:29.129239   11011 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:54:29.129258   11011 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:54:29.129280   11011 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:54:29.129311   11011 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:54:29.129342   11011 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:54:29.129372   11011 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:54:29.129394   11011 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:54:29.129417   11011 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:54:29.129443   11011 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:54:29.129464   11011 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:54:29.129483   11011 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:54:29.129514   11011 cri.go:89] found id: ""
	I1008 21:54:29.129624   11011 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:54:29.145218   11011 out.go:203] 
	W1008 21:54:29.148141   11011 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:54:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:54:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:54:29.148171   11011 out.go:285] * 
	* 
	W1008 21:54:29.423342   11011 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:54:29.426356   11011 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.58s)
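The addon-disable failures below repeat the pattern above: before disabling an addon, minikube checks whether the cluster is paused, and that check shells out to "sudo runc list -f json" on the node, which fails on this CRI-O profile with "open /run/runc: no such file or directory". A minimal sketch of reproducing the check by hand, assuming the addons-961288 profile and the commands shown in the log (using minikube ssh instead of the harness's internal SSH runner):

	# Succeeds in the log above: list kube-system containers via crictl.
	out/minikube-linux-arm64 -p addons-961288 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Fails in the log above: the runc state directory /run/runc does not exist under CRI-O.
	out/minikube-linux-arm64 -p addons-961288 ssh -- sudo runc list -f json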

TestAddons/parallel/Registry (15.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.811936ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00311542s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00709072s
addons_test.go:392: (dbg) Run:  kubectl --context addons-961288 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-961288 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-961288 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.102987627s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 ip
2025/10/08 21:54:55 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable registry --alsologtostderr -v=1: exit status 11 (247.508023ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 21:54:55.308907   11585 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:54:55.309155   11585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:54:55.309184   11585 out.go:374] Setting ErrFile to fd 2...
	I1008 21:54:55.309208   11585 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:54:55.309532   11585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:54:55.309889   11585 mustload.go:65] Loading cluster: addons-961288
	I1008 21:54:55.310333   11585 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:54:55.310373   11585 addons.go:606] checking whether the cluster is paused
	I1008 21:54:55.310533   11585 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:54:55.310570   11585 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:54:55.311143   11585 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:54:55.328002   11585 ssh_runner.go:195] Run: systemctl --version
	I1008 21:54:55.328137   11585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:54:55.345353   11585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:54:55.448217   11585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:54:55.448296   11585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:54:55.477463   11585 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:54:55.477482   11585 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:54:55.477487   11585 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:54:55.477491   11585 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:54:55.477495   11585 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:54:55.477499   11585 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:54:55.477502   11585 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:54:55.477505   11585 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:54:55.477508   11585 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:54:55.477514   11585 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:54:55.477517   11585 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:54:55.477520   11585 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:54:55.477523   11585 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:54:55.477526   11585 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:54:55.477529   11585 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:54:55.477534   11585 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:54:55.477538   11585 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:54:55.477542   11585 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:54:55.477546   11585 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:54:55.477549   11585 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:54:55.477553   11585 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:54:55.477557   11585 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:54:55.477561   11585 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:54:55.477563   11585 cri.go:89] found id: ""
	I1008 21:54:55.477611   11585 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:54:55.492711   11585 out.go:203] 
	W1008 21:54:55.495888   11585 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:54:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:54:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:54:55.495922   11585 out.go:285] * 
	* 
	W1008 21:54:55.500206   11585 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:54:55.503179   11585 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.61s)

TestAddons/parallel/RegistryCreds (0.54s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.101952ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-961288
addons_test.go:332: (dbg) Run:  kubectl --context addons-961288 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (263.459818ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 21:55:49.533213   13608 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:55:49.533878   13608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:49.533896   13608 out.go:374] Setting ErrFile to fd 2...
	I1008 21:55:49.533929   13608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:49.534347   13608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:55:49.534761   13608 mustload.go:65] Loading cluster: addons-961288
	I1008 21:55:49.535531   13608 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:49.535553   13608 addons.go:606] checking whether the cluster is paused
	I1008 21:55:49.535750   13608 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:49.535788   13608 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:55:49.536614   13608 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:55:49.555559   13608 ssh_runner.go:195] Run: systemctl --version
	I1008 21:55:49.555617   13608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:55:49.576970   13608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:55:49.680442   13608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:55:49.680534   13608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:55:49.711038   13608 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:55:49.711059   13608 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:55:49.711064   13608 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:55:49.711068   13608 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:55:49.711072   13608 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:55:49.711075   13608 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:55:49.711078   13608 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:55:49.711081   13608 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:55:49.711084   13608 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:55:49.711090   13608 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:55:49.711093   13608 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:55:49.711096   13608 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:55:49.711100   13608 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:55:49.711102   13608 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:55:49.711105   13608 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:55:49.711111   13608 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:55:49.711114   13608 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:55:49.711119   13608 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:55:49.711122   13608 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:55:49.711125   13608 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:55:49.711131   13608 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:55:49.711134   13608 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:55:49.711137   13608 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:55:49.711140   13608 cri.go:89] found id: ""
	I1008 21:55:49.711192   13608 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:55:49.726338   13608 out.go:203] 
	W1008 21:55:49.729263   13608 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:55:49.729296   13608 out.go:285] * 
	* 
	W1008 21:55:49.733731   13608 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:55:49.736566   13608 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.54s)

TestAddons/parallel/Ingress (144.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-961288 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-961288 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-961288 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3922fea2-7219-4b63-a27d-be3e0a81fdad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3922fea2-7219-4b63-a27d-be3e0a81fdad] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003326602s
I1008 21:55:27.931972    4286 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.562614974s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-961288 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-961288
helpers_test.go:243: (dbg) docker inspect addons-961288:

-- stdout --
	[
	    {
	        "Id": "d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be",
	        "Created": "2025-10-08T21:52:02.301949344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5452,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T21:52:02.363734503Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/hostname",
	        "HostsPath": "/var/lib/docker/containers/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/hosts",
	        "LogPath": "/var/lib/docker/containers/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be-json.log",
	        "Name": "/addons-961288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-961288:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-961288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be",
	                "LowerDir": "/var/lib/docker/overlay2/113f949d6358e5bb1dad460c4616a70c68b0923fd3b93a46c9f2bf6ee84244d2-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/113f949d6358e5bb1dad460c4616a70c68b0923fd3b93a46c9f2bf6ee84244d2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/113f949d6358e5bb1dad460c4616a70c68b0923fd3b93a46c9f2bf6ee84244d2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/113f949d6358e5bb1dad460c4616a70c68b0923fd3b93a46c9f2bf6ee84244d2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-961288",
	                "Source": "/var/lib/docker/volumes/addons-961288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-961288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-961288",
	                "name.minikube.sigs.k8s.io": "addons-961288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3cdab071a35cfac65641a7acaae834bd541793bf285d0997896ec3452aa1c585",
	            "SandboxKey": "/var/run/docker/netns/3cdab071a35c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-961288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:59:00:e0:57:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f0bdf34367215c4acf2890eaad3f999c0ad12a34fb55be42e954c6184bdd2e9",
	                    "EndpointID": "1f1a2468cb2bbae3bad169dc7c81d4d6e0c375f16a39fd99b28b528ea741095d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-961288",
	                        "d45eb870dafc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-961288 -n addons-961288
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-961288 logs -n 25: (1.545946145s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-889641                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-889641 │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:51 UTC │
	│ start   │ --download-only -p binary-mirror-098672 --alsologtostderr --binary-mirror http://127.0.0.1:36433 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-098672   │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │                     │
	│ delete  │ -p binary-mirror-098672                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-098672   │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:51 UTC │
	│ addons  │ enable dashboard -p addons-961288                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │                     │
	│ addons  │ disable dashboard -p addons-961288                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │                     │
	│ start   │ -p addons-961288 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:54 UTC │
	│ addons  │ addons-961288 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ addons  │ addons-961288 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ addons  │ addons-961288 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ addons  │ addons-961288 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ ip      │ addons-961288 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │ 08 Oct 25 21:54 UTC │
	│ addons  │ addons-961288 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ ssh     │ addons-961288 ssh cat /opt/local-path-provisioner/pvc-8e4ef856-8168-49ac-bec5-fd30ac333963_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │ 08 Oct 25 21:55 UTC │
	│ addons  │ addons-961288 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ enable headlamp -p addons-961288 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ addons-961288 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ addons-961288 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ addons-961288 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ addons-961288 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ ssh     │ addons-961288 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ addons-961288 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ addons-961288 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-961288                                                                                                                                                                                                                                                                                                                                                                                           │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │ 08 Oct 25 21:55 UTC │
	│ addons  │ addons-961288 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ ip      │ addons-961288 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:57 UTC │ 08 Oct 25 21:57 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 21:51:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
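
	The prefix format documented above is the standard klog/glog layout, which makes these logs easy to post-process. Below is a minimal Go sketch for splitting such a line into its fields; it is not part of minikube, and the regular expression is only an assumption derived from the format string quoted above.

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the documented prefix: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^\]]+)\] (.*)$`)

func main() {
	line := "I1008 21:51:36.292051    5049 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog-style line")
		return
	}
	// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=thread id, m[5]=file:line, m[6]=message
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
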
	I1008 21:51:36.292051    5049 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:51:36.292261    5049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:51:36.292276    5049 out.go:374] Setting ErrFile to fd 2...
	I1008 21:51:36.292282    5049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:51:36.292581    5049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:51:36.293078    5049 out.go:368] Setting JSON to false
	I1008 21:51:36.293921    5049 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2047,"bootTime":1759958250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 21:51:36.293991    5049 start.go:141] virtualization:  
	I1008 21:51:36.297380    5049 out.go:179] * [addons-961288] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 21:51:36.301138    5049 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 21:51:36.301173    5049 notify.go:220] Checking for updates...
	I1008 21:51:36.304164    5049 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 21:51:36.307369    5049 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 21:51:36.310175    5049 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 21:51:36.313040    5049 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 21:51:36.315989    5049 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 21:51:36.319059    5049 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 21:51:36.345759    5049 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 21:51:36.345957    5049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 21:51:36.414608    5049 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-08 21:51:36.405472994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 21:51:36.414717    5049 docker.go:318] overlay module found
	I1008 21:51:36.417818    5049 out.go:179] * Using the docker driver based on user configuration
	I1008 21:51:36.420759    5049 start.go:305] selected driver: docker
	I1008 21:51:36.420797    5049 start.go:925] validating driver "docker" against <nil>
	I1008 21:51:36.420813    5049 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 21:51:36.421561    5049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 21:51:36.475047    5049 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-08 21:51:36.466315739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 21:51:36.475215    5049 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 21:51:36.475442    5049 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 21:51:36.478472    5049 out.go:179] * Using Docker driver with root privileges
	I1008 21:51:36.481364    5049 cni.go:84] Creating CNI manager for ""
	I1008 21:51:36.481438    5049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 21:51:36.481449    5049 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 21:51:36.481527    5049 start.go:349] cluster config:
	{Name:addons-961288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1008 21:51:36.484733    5049 out.go:179] * Starting "addons-961288" primary control-plane node in "addons-961288" cluster
	I1008 21:51:36.487504    5049 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 21:51:36.490406    5049 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 21:51:36.493297    5049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 21:51:36.493356    5049 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 21:51:36.493371    5049 cache.go:58] Caching tarball of preloaded images
	I1008 21:51:36.493390    5049 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 21:51:36.493458    5049 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 21:51:36.493467    5049 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 21:51:36.493823    5049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/config.json ...
	I1008 21:51:36.493891    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/config.json: {Name:mk705f89e8e849311d188624c5dd93d0bb86e461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:51:36.509420    5049 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1008 21:51:36.509573    5049 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1008 21:51:36.509597    5049 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1008 21:51:36.509602    5049 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1008 21:51:36.509610    5049 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1008 21:51:36.509615    5049 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1008 21:51:54.772520    5049 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1008 21:51:54.772571    5049 cache.go:232] Successfully downloaded all kic artifacts
	I1008 21:51:54.772600    5049 start.go:360] acquireMachinesLock for addons-961288: {Name:mkdb9a642333218a6563588e9d25960d2f4ebc46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 21:51:54.772733    5049 start.go:364] duration metric: took 111.303µs to acquireMachinesLock for "addons-961288"
	I1008 21:51:54.772766    5049 start.go:93] Provisioning new machine with config: &{Name:addons-961288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 21:51:54.772853    5049 start.go:125] createHost starting for "" (driver="docker")
	I1008 21:51:54.776366    5049 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1008 21:51:54.776658    5049 start.go:159] libmachine.API.Create for "addons-961288" (driver="docker")
	I1008 21:51:54.776709    5049 client.go:168] LocalClient.Create starting
	I1008 21:51:54.776849    5049 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 21:51:54.903661    5049 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 21:51:55.094724    5049 cli_runner.go:164] Run: docker network inspect addons-961288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 21:51:55.111344    5049 cli_runner.go:211] docker network inspect addons-961288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 21:51:55.111436    5049 network_create.go:284] running [docker network inspect addons-961288] to gather additional debugging logs...
	I1008 21:51:55.111458    5049 cli_runner.go:164] Run: docker network inspect addons-961288
	W1008 21:51:55.128544    5049 cli_runner.go:211] docker network inspect addons-961288 returned with exit code 1
	I1008 21:51:55.128576    5049 network_create.go:287] error running [docker network inspect addons-961288]: docker network inspect addons-961288: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-961288 not found
	I1008 21:51:55.128602    5049 network_create.go:289] output of [docker network inspect addons-961288]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-961288 not found
	
	** /stderr **
	I1008 21:51:55.128710    5049 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 21:51:55.145015    5049 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b1130}
	I1008 21:51:55.145054    5049 network_create.go:124] attempt to create docker network addons-961288 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 21:51:55.145107    5049 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-961288 addons-961288
	I1008 21:51:55.197785    5049 network_create.go:108] docker network addons-961288 192.168.49.0/24 created
	I1008 21:51:55.197820    5049 kic.go:121] calculated static IP "192.168.49.2" for the "addons-961288" container
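
	The "docker network create" invocation above is a plain CLI call, so the same network can be reproduced (or inspected) outside of minikube. The following Go sketch simply drives that CLI via os/exec, with the subnet, gateway, MTU and labels copied from the log line above; it is illustrative only and not minikube's internal network_create code. Removing the network again is `docker network rm addons-961288`.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Arguments copied verbatim from the network_create log line above.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=addons-961288",
		"addons-961288")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}
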
	I1008 21:51:55.197887    5049 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 21:51:55.213469    5049 cli_runner.go:164] Run: docker volume create addons-961288 --label name.minikube.sigs.k8s.io=addons-961288 --label created_by.minikube.sigs.k8s.io=true
	I1008 21:51:55.232226    5049 oci.go:103] Successfully created a docker volume addons-961288
	I1008 21:51:55.232319    5049 cli_runner.go:164] Run: docker run --rm --name addons-961288-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-961288 --entrypoint /usr/bin/test -v addons-961288:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 21:51:57.684901    5049 cli_runner.go:217] Completed: docker run --rm --name addons-961288-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-961288 --entrypoint /usr/bin/test -v addons-961288:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.452542691s)
	I1008 21:51:57.684931    5049 oci.go:107] Successfully prepared a docker volume addons-961288
	I1008 21:51:57.684966    5049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 21:51:57.684984    5049 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 21:51:57.685058    5049 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-961288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 21:52:02.228999    5049 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-961288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.543903536s)
	I1008 21:52:02.229034    5049 kic.go:203] duration metric: took 4.544046215s to extract preloaded images to volume ...
	W1008 21:52:02.229188    5049 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 21:52:02.229316    5049 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 21:52:02.286655    5049 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-961288 --name addons-961288 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-961288 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-961288 --network addons-961288 --ip 192.168.49.2 --volume addons-961288:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 21:52:02.618199    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Running}}
	I1008 21:52:02.638020    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:02.660886    5049 cli_runner.go:164] Run: docker exec addons-961288 stat /var/lib/dpkg/alternatives/iptables
	I1008 21:52:02.711872    5049 oci.go:144] the created container "addons-961288" has a running status.
	I1008 21:52:02.711899    5049 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa...
	I1008 21:52:02.970470    5049 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 21:52:03.001850    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:03.025275    5049 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 21:52:03.025301    5049 kic_runner.go:114] Args: [docker exec --privileged addons-961288 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 21:52:03.106051    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:03.128498    5049 machine.go:93] provisionDockerMachine start ...
	I1008 21:52:03.128584    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:03.155603    5049 main.go:141] libmachine: Using SSH client type: native
	I1008 21:52:03.155926    5049 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 21:52:03.155935    5049 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 21:52:03.156535    5049 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33340->127.0.0.1:32768: read: connection reset by peer
	I1008 21:52:06.305066    5049 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-961288
	
	I1008 21:52:06.305087    5049 ubuntu.go:182] provisioning hostname "addons-961288"
	I1008 21:52:06.305147    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:06.322911    5049 main.go:141] libmachine: Using SSH client type: native
	I1008 21:52:06.323253    5049 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 21:52:06.323272    5049 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-961288 && echo "addons-961288" | sudo tee /etc/hostname
	I1008 21:52:06.474679    5049 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-961288
	
	I1008 21:52:06.474753    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:06.491657    5049 main.go:141] libmachine: Using SSH client type: native
	I1008 21:52:06.491964    5049 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 21:52:06.491986    5049 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-961288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-961288/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-961288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 21:52:06.637802    5049 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 21:52:06.637840    5049 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 21:52:06.637863    5049 ubuntu.go:190] setting up certificates
	I1008 21:52:06.637874    5049 provision.go:84] configureAuth start
	I1008 21:52:06.637943    5049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-961288
	I1008 21:52:06.655395    5049 provision.go:143] copyHostCerts
	I1008 21:52:06.655489    5049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 21:52:06.655620    5049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 21:52:06.655683    5049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 21:52:06.655740    5049 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.addons-961288 san=[127.0.0.1 192.168.49.2 addons-961288 localhost minikube]
	I1008 21:52:06.921476    5049 provision.go:177] copyRemoteCerts
	I1008 21:52:06.921544    5049 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 21:52:06.921587    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:06.938536    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
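
	The "native" SSH client referenced in these lines dials the container's published port 32768 on 127.0.0.1 as user docker, authenticating with the generated id_rsa key. A rough equivalent using the golang.org/x/crypto/ssh package is sketched below; this is an illustration under those assumptions, not the libmachine code minikube actually uses.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and port are taken from the sshutil log line above.
	keyPath := "/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa"
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("%s err=%v\n", out, err)
}
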
	I1008 21:52:07.041222    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 21:52:07.058842    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 21:52:07.075595    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 21:52:07.092175    5049 provision.go:87] duration metric: took 454.274831ms to configureAuth
	I1008 21:52:07.092244    5049 ubuntu.go:206] setting minikube options for container-runtime
	I1008 21:52:07.092448    5049 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:52:07.092562    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.110254    5049 main.go:141] libmachine: Using SSH client type: native
	I1008 21:52:07.110548    5049 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 21:52:07.110566    5049 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 21:52:07.359291    5049 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 21:52:07.359330    5049 machine.go:96] duration metric: took 4.230798217s to provisionDockerMachine
	I1008 21:52:07.359339    5049 client.go:171] duration metric: took 12.582618409s to LocalClient.Create
	I1008 21:52:07.359352    5049 start.go:167] duration metric: took 12.582694094s to libmachine.API.Create "addons-961288"
	I1008 21:52:07.359363    5049 start.go:293] postStartSetup for "addons-961288" (driver="docker")
	I1008 21:52:07.359373    5049 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 21:52:07.359449    5049 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 21:52:07.359494    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.377809    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:07.481917    5049 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 21:52:07.485093    5049 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 21:52:07.485124    5049 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 21:52:07.485135    5049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 21:52:07.485200    5049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 21:52:07.485231    5049 start.go:296] duration metric: took 125.861643ms for postStartSetup
	I1008 21:52:07.485539    5049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-961288
	I1008 21:52:07.502685    5049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/config.json ...
	I1008 21:52:07.502985    5049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 21:52:07.503040    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.520700    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:07.618994    5049 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 21:52:07.624275    5049 start.go:128] duration metric: took 12.851405954s to createHost
	I1008 21:52:07.624299    5049 start.go:83] releasing machines lock for "addons-961288", held for 12.851552703s
	I1008 21:52:07.624390    5049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-961288
	I1008 21:52:07.641455    5049 ssh_runner.go:195] Run: cat /version.json
	I1008 21:52:07.641505    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.641511    5049 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 21:52:07.641572    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.660354    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:07.670530    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:07.842903    5049 ssh_runner.go:195] Run: systemctl --version
	I1008 21:52:07.849026    5049 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 21:52:07.884099    5049 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 21:52:07.888213    5049 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 21:52:07.888336    5049 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 21:52:07.916413    5049 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 21:52:07.916448    5049 start.go:495] detecting cgroup driver to use...
	I1008 21:52:07.916480    5049 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 21:52:07.916547    5049 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 21:52:07.933357    5049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 21:52:07.945513    5049 docker.go:218] disabling cri-docker service (if available) ...
	I1008 21:52:07.945575    5049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 21:52:07.963459    5049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 21:52:07.982140    5049 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 21:52:08.106168    5049 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 21:52:08.228457    5049 docker.go:234] disabling docker service ...
	I1008 21:52:08.228522    5049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 21:52:08.249141    5049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 21:52:08.262379    5049 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 21:52:08.381443    5049 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 21:52:08.499889    5049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 21:52:08.512602    5049 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 21:52:08.527273    5049 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 21:52:08.527339    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.536228    5049 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 21:52:08.536294    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.544989    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.553685    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.562202    5049 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 21:52:08.570825    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.579667    5049 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.592849    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
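
	The chain of sed one-liners above amounts to a handful of regex substitutions on /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The sketch below reproduces the first two substitutions in Go on an in-memory sample; the sample config contents are made up for illustration, only the replacement patterns mirror the sed commands in the log.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical starting contents of 02-crio.conf, for illustration only.
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cg := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cg.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}
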
	I1008 21:52:08.601648    5049 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 21:52:08.609236    5049 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 21:52:08.609318    5049 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 21:52:08.622755    5049 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
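
	The sequence above is the usual check-then-load pattern: the bridge-nf sysctl path only exists once the br_netfilter module is loaded, so the failed sysctl is expected and modprobe is the remedy, after which IPv4 forwarding is switched on. A compact sketch of the same steps follows (requires root; illustrative, not minikube's crio.go code).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl file appears only after br_netfilter is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v\n%s", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
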
	I1008 21:52:08.630516    5049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 21:52:08.742422    5049 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 21:52:08.867707    5049 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 21:52:08.867825    5049 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 21:52:08.871512    5049 start.go:563] Will wait 60s for crictl version
	I1008 21:52:08.871600    5049 ssh_runner.go:195] Run: which crictl
	I1008 21:52:08.875359    5049 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 21:52:08.904382    5049 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 21:52:08.904578    5049 ssh_runner.go:195] Run: crio --version
	I1008 21:52:08.937222    5049 ssh_runner.go:195] Run: crio --version
	I1008 21:52:08.969443    5049 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 21:52:08.972274    5049 cli_runner.go:164] Run: docker network inspect addons-961288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 21:52:08.987381    5049 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 21:52:08.991330    5049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 21:52:09.001167    5049 kubeadm.go:883] updating cluster {Name:addons-961288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 21:52:09.001283    5049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 21:52:09.001373    5049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 21:52:09.038423    5049 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 21:52:09.038447    5049 crio.go:433] Images already preloaded, skipping extraction
	I1008 21:52:09.038502    5049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 21:52:09.067265    5049 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 21:52:09.067286    5049 cache_images.go:85] Images are preloaded, skipping loading
	I1008 21:52:09.067295    5049 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 21:52:09.067385    5049 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-961288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 21:52:09.067474    5049 ssh_runner.go:195] Run: crio config
	I1008 21:52:09.120765    5049 cni.go:84] Creating CNI manager for ""
	I1008 21:52:09.120791    5049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 21:52:09.120839    5049 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 21:52:09.120868    5049 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-961288 NodeName:addons-961288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 21:52:09.121050    5049 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-961288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
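
	minikube renders the kubeadm/kubelet/kube-proxy manifest above from templates in its own source tree, substituting per-profile values such as the node IP, API server port and node name. As a rough illustration only, here is a hand-rolled text/template fragment covering part of the InitConfiguration; the template text is an assumption, not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// Hand-rolled fragment of the InitConfiguration shown above, for illustration.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the generated config above.
	_ = t.Execute(os.Stdout, map[string]any{
		"NodeIP":        "192.168.49.2",
		"APIServerPort": 8443,
		"NodeName":      "addons-961288",
	})
}
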
	
	I1008 21:52:09.121168    5049 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 21:52:09.128885    5049 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 21:52:09.128977    5049 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 21:52:09.136807    5049 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1008 21:52:09.149457    5049 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 21:52:09.164379    5049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1008 21:52:09.177389    5049 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 21:52:09.181027    5049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 21:52:09.190758    5049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 21:52:09.294419    5049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 21:52:09.314125    5049 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288 for IP: 192.168.49.2
	I1008 21:52:09.314188    5049 certs.go:195] generating shared ca certs ...
	I1008 21:52:09.314219    5049 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:09.314397    5049 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 21:52:09.426815    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt ...
	I1008 21:52:09.426845    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt: {Name:mka3917889a100f4c1dcc59b106b117a87bc8e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:09.427033    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key ...
	I1008 21:52:09.427046    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key: {Name:mkb00cad5a1a442be62fc42dd2dd6615aa701bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:09.427138    5049 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 21:52:10.005204    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt ...
	I1008 21:52:10.005240    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt: {Name:mke7237f65caf5c4ac41b833cf33815e54380d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:10.005434    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key ...
	I1008 21:52:10.005443    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key: {Name:mke0fe2ca068371875b4dd6e540113cc51c1c087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:10.005511    5049 certs.go:257] generating profile certs ...
	I1008 21:52:10.005584    5049 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.key
	I1008 21:52:10.005600    5049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt with IP's: []
	I1008 21:52:11.414091    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt ...
	I1008 21:52:11.414122    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: {Name:mk21ad367910f0f6fa334a16944294025b7939aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.414312    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.key ...
	I1008 21:52:11.414324    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.key: {Name:mk1a8dd0fed5e7d7a3722edb5e3a8baf9cf375a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.414407    5049 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key.b37b7217
	I1008 21:52:11.414428    5049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt.b37b7217 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1008 21:52:11.597366    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt.b37b7217 ...
	I1008 21:52:11.597389    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt.b37b7217: {Name:mk553d0c64138e528dbe64b1cb0d06d3de9b99e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.597537    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key.b37b7217 ...
	I1008 21:52:11.597546    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key.b37b7217: {Name:mke7aee446fe9d079fd0797289ac5b1e60fe3660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.597615    5049 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt.b37b7217 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt
	I1008 21:52:11.597725    5049 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key.b37b7217 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key
	I1008 21:52:11.597775    5049 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.key
	I1008 21:52:11.597791    5049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.crt with IP's: []
	I1008 21:52:11.921739    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.crt ...
	I1008 21:52:11.921772    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.crt: {Name:mk6fa9bf513c72e6bcfbc7e02d11980100d655d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.921958    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.key ...
	I1008 21:52:11.921970    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.key: {Name:mke7f79c1430c2088347633b162587d54537eea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.922171    5049 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 21:52:11.922214    5049 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 21:52:11.922245    5049 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 21:52:11.922279    5049 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 21:52:11.922895    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 21:52:11.941146    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 21:52:11.959233    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 21:52:11.976799    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 21:52:11.994315    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 21:52:12.014028    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 21:52:12.032846    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 21:52:12.051387    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 21:52:12.069246    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 21:52:12.088062    5049 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
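
The apiserver profile certificate copied above was generated with the SANs listed earlier in this run (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2). If a start ever fails on a certificate/SAN mismatch, the generated SANs can be inspected directly; a minimal sketch, assuming the same profile path as in this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt \
      | grep -A1 'Subject Alternative Name'    # should list the service VIP, localhost and the node IP
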
	I1008 21:52:12.100796    5049 ssh_runner.go:195] Run: openssl version
	I1008 21:52:12.107357    5049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 21:52:12.115848    5049 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 21:52:12.120760    5049 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 21:52:12.120875    5049 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 21:52:12.161476    5049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
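
The two steps above are the standard OpenSSL trust-directory handling: certificates under /etc/ssl/certs are looked up by subject hash, so the symlink name must match the output of openssl x509 -hash (b5213941 for this CA). A minimal shell sketch of the same idea:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints the subject hash, b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"              # ".0" = first certificate with this hash
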
	I1008 21:52:12.169723    5049 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 21:52:12.172960    5049 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 21:52:12.173012    5049 kubeadm.go:400] StartCluster: {Name:addons-961288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 21:52:12.173084    5049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:52:12.173137    5049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:52:12.199482    5049 cri.go:89] found id: ""
	I1008 21:52:12.199574    5049 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 21:52:12.207402    5049 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 21:52:12.215073    5049 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 21:52:12.215180    5049 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 21:52:12.223286    5049 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 21:52:12.223306    5049 kubeadm.go:157] found existing configuration files:
	
	I1008 21:52:12.223362    5049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 21:52:12.230890    5049 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 21:52:12.230953    5049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 21:52:12.238335    5049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 21:52:12.246145    5049 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 21:52:12.246208    5049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 21:52:12.253766    5049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 21:52:12.261411    5049 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 21:52:12.261530    5049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 21:52:12.269016    5049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 21:52:12.276758    5049 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 21:52:12.276888    5049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 21:52:12.284457    5049 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 21:52:12.327718    5049 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 21:52:12.327945    5049 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 21:52:12.364191    5049 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 21:52:12.364271    5049 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 21:52:12.364315    5049 kubeadm.go:318] OS: Linux
	I1008 21:52:12.364368    5049 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 21:52:12.364423    5049 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 21:52:12.364475    5049 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 21:52:12.364529    5049 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 21:52:12.364583    5049 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 21:52:12.364668    5049 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 21:52:12.364721    5049 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 21:52:12.364776    5049 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 21:52:12.364828    5049 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 21:52:12.438474    5049 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 21:52:12.438594    5049 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 21:52:12.438695    5049 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 21:52:12.450041    5049 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 21:52:12.454426    5049 out.go:252]   - Generating certificates and keys ...
	I1008 21:52:12.454599    5049 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 21:52:12.454685    5049 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 21:52:12.571138    5049 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 21:52:12.955634    5049 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 21:52:13.319000    5049 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 21:52:14.216336    5049 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 21:52:15.412074    5049 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 21:52:15.412361    5049 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-961288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 21:52:16.105987    5049 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 21:52:16.106143    5049 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-961288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 21:52:16.564088    5049 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 21:52:16.948128    5049 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 21:52:17.466038    5049 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 21:52:17.466317    5049 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 21:52:17.824288    5049 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 21:52:19.511346    5049 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 21:52:19.721588    5049 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 21:52:20.666938    5049 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 21:52:20.940338    5049 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 21:52:20.941093    5049 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 21:52:20.943736    5049 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 21:52:20.947177    5049 out.go:252]   - Booting up control plane ...
	I1008 21:52:20.947282    5049 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 21:52:20.947383    5049 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 21:52:20.947454    5049 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 21:52:20.963884    5049 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 21:52:20.964246    5049 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 21:52:20.971727    5049 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 21:52:20.972076    5049 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 21:52:20.972299    5049 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 21:52:21.110140    5049 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 21:52:21.110265    5049 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 21:52:23.105430    5049 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001278731s
	I1008 21:52:23.109213    5049 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 21:52:23.109319    5049 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 21:52:23.109417    5049 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 21:52:23.109504    5049 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 21:52:27.045576    5049 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.93571035s
	I1008 21:52:27.450260    5049 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.341065181s
	I1008 21:52:29.113569    5049 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002314528s
	I1008 21:52:29.131119    5049 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 21:52:29.146805    5049 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 21:52:29.160880    5049 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 21:52:29.161099    5049 kubeadm.go:318] [mark-control-plane] Marking the node addons-961288 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 21:52:29.177037    5049 kubeadm.go:318] [bootstrap-token] Using token: s30xba.14zmtly2zm02vci8
	I1008 21:52:29.182153    5049 out.go:252]   - Configuring RBAC rules ...
	I1008 21:52:29.182288    5049 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 21:52:29.184464    5049 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 21:52:29.192621    5049 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 21:52:29.196660    5049 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 21:52:29.202566    5049 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 21:52:29.206577    5049 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 21:52:29.520056    5049 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 21:52:29.949987    5049 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 21:52:30.518664    5049 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 21:52:30.520176    5049 kubeadm.go:318] 
	I1008 21:52:30.520291    5049 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 21:52:30.520315    5049 kubeadm.go:318] 
	I1008 21:52:30.520400    5049 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 21:52:30.520406    5049 kubeadm.go:318] 
	I1008 21:52:30.520432    5049 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 21:52:30.520573    5049 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 21:52:30.520637    5049 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 21:52:30.520643    5049 kubeadm.go:318] 
	I1008 21:52:30.520699    5049 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 21:52:30.520704    5049 kubeadm.go:318] 
	I1008 21:52:30.520754    5049 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 21:52:30.520759    5049 kubeadm.go:318] 
	I1008 21:52:30.520813    5049 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 21:52:30.520891    5049 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 21:52:30.520962    5049 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 21:52:30.520967    5049 kubeadm.go:318] 
	I1008 21:52:30.521077    5049 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 21:52:30.521158    5049 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 21:52:30.521163    5049 kubeadm.go:318] 
	I1008 21:52:30.521251    5049 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token s30xba.14zmtly2zm02vci8 \
	I1008 21:52:30.521358    5049 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 21:52:30.521380    5049 kubeadm.go:318] 	--control-plane 
	I1008 21:52:30.521385    5049 kubeadm.go:318] 
	I1008 21:52:30.521477    5049 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 21:52:30.521481    5049 kubeadm.go:318] 
	I1008 21:52:30.521581    5049 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token s30xba.14zmtly2zm02vci8 \
	I1008 21:52:30.521712    5049 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 21:52:30.525791    5049 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 21:52:30.526037    5049 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 21:52:30.526146    5049 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
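
The control-plane-check lines above are kubeadm polling the components' standard health endpoints until they answer. The same probes can be reproduced by hand from inside the node (e.g. via minikube ssh); -k is needed because the serving certificates are signed by the cluster CA rather than a public one. A sketch, using the addresses from this run:

    curl -ks https://192.168.49.2:8443/livez      # kube-apiserver
    curl -ks https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -ks https://127.0.0.1:10259/livez        # kube-scheduler
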
	I1008 21:52:30.526161    5049 cni.go:84] Creating CNI manager for ""
	I1008 21:52:30.526169    5049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 21:52:30.529336    5049 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 21:52:30.532292    5049 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 21:52:30.536409    5049 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 21:52:30.536428    5049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 21:52:30.549494    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 21:52:30.838851    5049 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 21:52:30.838901    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:30.838965    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-961288 minikube.k8s.io/updated_at=2025_10_08T21_52_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=addons-961288 minikube.k8s.io/primary=true
	I1008 21:52:30.975855    5049 ops.go:34] apiserver oom_adj: -16
	I1008 21:52:30.975968    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:31.476637    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:31.976778    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:32.476850    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:32.976552    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:33.476126    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:33.976975    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:34.476068    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:34.976589    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:35.074858    5049 kubeadm.go:1113] duration metric: took 4.23600977s to wait for elevateKubeSystemPrivileges
	I1008 21:52:35.074890    5049 kubeadm.go:402] duration metric: took 22.901882517s to StartCluster
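
The elevateKubeSystemPrivileges step that just finished is the clusterrolebinding call shown a few lines earlier plus the repeated "get sa default" polls, which simply wait for the default service account to exist before binding it. Expressed as a manifest, the binding it creates is roughly the following (a hedged equivalent of that kubectl call, not the exact object minikube applies):

    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: minikube-rbac
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: kube-system
    EOF
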
	I1008 21:52:35.074907    5049 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:35.075016    5049 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 21:52:35.075450    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:35.075648    5049 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 21:52:35.075810    5049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 21:52:35.076098    5049 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:52:35.076144    5049 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1008 21:52:35.076226    5049 addons.go:69] Setting yakd=true in profile "addons-961288"
	I1008 21:52:35.076244    5049 addons.go:238] Setting addon yakd=true in "addons-961288"
	I1008 21:52:35.076265    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.076762    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.077086    5049 addons.go:69] Setting inspektor-gadget=true in profile "addons-961288"
	I1008 21:52:35.077106    5049 addons.go:238] Setting addon inspektor-gadget=true in "addons-961288"
	I1008 21:52:35.077136    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.077559    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.077853    5049 addons.go:69] Setting metrics-server=true in profile "addons-961288"
	I1008 21:52:35.077880    5049 addons.go:238] Setting addon metrics-server=true in "addons-961288"
	I1008 21:52:35.077928    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.078334    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.084090    5049 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-961288"
	I1008 21:52:35.084164    5049 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-961288"
	I1008 21:52:35.084222    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.084836    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.087295    5049 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-961288"
	I1008 21:52:35.087342    5049 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-961288"
	I1008 21:52:35.087383    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.087852    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.090808    5049 addons.go:69] Setting cloud-spanner=true in profile "addons-961288"
	I1008 21:52:35.090847    5049 addons.go:238] Setting addon cloud-spanner=true in "addons-961288"
	I1008 21:52:35.090881    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.091442    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.095379    5049 addons.go:69] Setting registry=true in profile "addons-961288"
	I1008 21:52:35.095413    5049 addons.go:238] Setting addon registry=true in "addons-961288"
	I1008 21:52:35.095457    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.096151    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.098825    5049 addons.go:69] Setting registry-creds=true in profile "addons-961288"
	I1008 21:52:35.098895    5049 addons.go:238] Setting addon registry-creds=true in "addons-961288"
	I1008 21:52:35.108110    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.108608    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.099060    5049 addons.go:69] Setting storage-provisioner=true in profile "addons-961288"
	I1008 21:52:35.128247    5049 addons.go:238] Setting addon storage-provisioner=true in "addons-961288"
	I1008 21:52:35.128289    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.128763    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.099072    5049 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-961288"
	I1008 21:52:35.155763    5049 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-961288"
	I1008 21:52:35.156095    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.099079    5049 addons.go:69] Setting volcano=true in profile "addons-961288"
	I1008 21:52:35.174723    5049 addons.go:238] Setting addon volcano=true in "addons-961288"
	I1008 21:52:35.174765    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.175263    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.181593    5049 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1008 21:52:35.184488    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1008 21:52:35.184518    5049 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1008 21:52:35.184582    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.099085    5049 addons.go:69] Setting volumesnapshots=true in profile "addons-961288"
	I1008 21:52:35.189901    5049 addons.go:238] Setting addon volumesnapshots=true in "addons-961288"
	I1008 21:52:35.189942    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.190403    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.104502    5049 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-961288"
	I1008 21:52:35.206069    5049 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-961288"
	I1008 21:52:35.206106    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.206575    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.104518    5049 addons.go:69] Setting default-storageclass=true in profile "addons-961288"
	I1008 21:52:35.225451    5049 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-961288"
	I1008 21:52:35.225806    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.229607    5049 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1008 21:52:35.233772    5049 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 21:52:35.233847    5049 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 21:52:35.233967    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.104528    5049 addons.go:69] Setting gcp-auth=true in profile "addons-961288"
	I1008 21:52:35.241735    5049 mustload.go:65] Loading cluster: addons-961288
	I1008 21:52:35.241931    5049 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:52:35.242189    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.270141    5049 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1008 21:52:35.104535    5049 addons.go:69] Setting ingress=true in profile "addons-961288"
	I1008 21:52:35.272180    5049 addons.go:238] Setting addon ingress=true in "addons-961288"
	I1008 21:52:35.272225    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.272674    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.272898    5049 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1008 21:52:35.272942    5049 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1008 21:52:35.273000    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.104542    5049 addons.go:69] Setting ingress-dns=true in profile "addons-961288"
	I1008 21:52:35.296841    5049 addons.go:238] Setting addon ingress-dns=true in "addons-961288"
	I1008 21:52:35.296890    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.297351    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.105343    5049 out.go:179] * Verifying Kubernetes components...
	I1008 21:52:35.353039    5049 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1008 21:52:35.381791    5049 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1008 21:52:35.382381    5049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 21:52:35.386569    5049 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-961288"
	I1008 21:52:35.386660    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.387128    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.392743    5049 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 21:52:35.392821    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1008 21:52:35.392914    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.416842    5049 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1008 21:52:35.420314    5049 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1008 21:52:35.423469    5049 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1008 21:52:35.423541    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1008 21:52:35.423593    5049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 21:52:35.423648    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.433680    5049 out.go:179]   - Using image docker.io/registry:3.0.0
	I1008 21:52:35.436027    5049 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1008 21:52:35.436051    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1008 21:52:35.436144    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.449715    5049 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1008 21:52:35.423469    5049 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1008 21:52:35.452892    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1008 21:52:35.453105    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	W1008 21:52:35.456309    5049 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1008 21:52:35.456622    5049 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1008 21:52:35.456645    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1008 21:52:35.456714    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.452805    5049 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 21:52:35.501520    5049 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 21:52:35.501595    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 21:52:35.501706    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.515128    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.552628    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1008 21:52:35.555372    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1008 21:52:35.555407    5049 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1008 21:52:35.555488    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.572794    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.573696    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1008 21:52:35.576660    5049 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1008 21:52:35.578002    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.589687    5049 out.go:179]   - Using image docker.io/busybox:stable
	I1008 21:52:35.589843    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1008 21:52:35.590770    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.594329    5049 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 21:52:35.594352    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1008 21:52:35.594422    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.598841    5049 addons.go:238] Setting addon default-storageclass=true in "addons-961288"
	I1008 21:52:35.598879    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.599279    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.621324    5049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1008 21:52:35.627537    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1008 21:52:35.628140    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.632713    5049 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1008 21:52:35.635764    5049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1008 21:52:35.636902    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1008 21:52:35.636986    5049 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 21:52:35.640687    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1008 21:52:35.640776    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.644664    5049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1008 21:52:35.647434    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1008 21:52:35.647688    5049 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 21:52:35.647703    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1008 21:52:35.647766    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.657452    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1008 21:52:35.661764    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1008 21:52:35.667920    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1008 21:52:35.673589    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1008 21:52:35.673615    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1008 21:52:35.673795    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.678932    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.711273    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.718074    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.738872    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.761856    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.777377    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.795557    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.800182    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.806541    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	W1008 21:52:35.810325    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:35.810365    5049 retry.go:31] will retry after 269.6754ms: ssh: handshake failed: EOF
	W1008 21:52:35.810492    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:35.810499    5049 retry.go:31] will retry after 295.399508ms: ssh: handshake failed: EOF
	W1008 21:52:35.812493    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:35.812517    5049 retry.go:31] will retry after 267.839688ms: ssh: handshake failed: EOF
	I1008 21:52:35.829281    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.829717    5049 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 21:52:35.829728    5049 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 21:52:35.829775    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.864548    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	W1008 21:52:36.083042    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:36.083096    5049 retry.go:31] will retry after 195.013468ms: ssh: handshake failed: EOF
	W1008 21:52:36.112387    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:36.112417    5049 retry.go:31] will retry after 489.914771ms: ssh: handshake failed: EOF
	I1008 21:52:36.285169    5049 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 21:52:36.285260    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1008 21:52:36.311522    5049 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1008 21:52:36.311599    5049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1008 21:52:36.392346    5049 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1008 21:52:36.392425    5049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1008 21:52:36.395176    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1008 21:52:36.395258    5049 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1008 21:52:36.403039    5049 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1008 21:52:36.403076    5049 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1008 21:52:36.413026    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1008 21:52:36.472025    5049 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 21:52:36.472047    5049 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 21:52:36.519325    5049 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1008 21:52:36.519347    5049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1008 21:52:36.547045    5049 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:36.547115    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1008 21:52:36.552153    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 21:52:36.555238    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1008 21:52:36.555298    5049 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1008 21:52:36.562439    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 21:52:36.583477    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1008 21:52:36.592780    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 21:52:36.608955    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 21:52:36.631667    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:36.635468    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1008 21:52:36.651327    5049 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1008 21:52:36.651349    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1008 21:52:36.663968    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1008 21:52:36.663991    5049 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1008 21:52:36.667221    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1008 21:52:36.667239    5049 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1008 21:52:36.668302    5049 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 21:52:36.668333    5049 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 21:52:36.803501    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1008 21:52:36.845351    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 21:52:36.847280    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1008 21:52:36.847303    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1008 21:52:36.853347    5049 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 21:52:36.853371    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1008 21:52:36.874086    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 21:52:36.973316    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1008 21:52:36.973342    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1008 21:52:37.089120    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 21:52:37.098052    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1008 21:52:37.225978    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1008 21:52:37.226006    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1008 21:52:37.256545    5049 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.832917562s)
	I1008 21:52:37.256611    5049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 21:52:37.256670    5049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.874273368s)
	I1008 21:52:37.256686    5049 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
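The sed pipeline in the command completed just above rewrites the coredns ConfigMap before replacing it: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" line and a log directive ahead of errors. Reconstructed purely from that sed script (the enclosing .:53 block and the other stock CoreDNS plugins are assumed here, they are not shown in the log), the patched Corefile fragment should come out roughly as:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }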
	I1008 21:52:37.363692    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1008 21:52:37.363727    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1008 21:52:37.448472    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 21:52:37.607451    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1008 21:52:37.607477    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1008 21:52:37.760572    5049 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-961288" context rescaled to 1 replicas
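The rescale noted here is performed through the Kubernetes API from inside minikube; a rough kubectl equivalent for anyone reproducing the state by hand (a sketch, not the call the binary actually makes) would be:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
        scale deployment coredns --replicas=1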
	I1008 21:52:37.827786    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1008 21:52:37.827813    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1008 21:52:37.971330    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1008 21:52:37.971356    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1008 21:52:38.127831    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1008 21:52:38.127855    5049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1008 21:52:38.349946    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1008 21:52:38.349970    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1008 21:52:38.595718    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1008 21:52:38.595742    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1008 21:52:38.808780    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.395671572s)
	I1008 21:52:38.808838    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.256624493s)
	I1008 21:52:38.809016    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.246500372s)
	I1008 21:52:38.809066    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.225519994s)
	I1008 21:52:38.809340    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 21:52:38.809357    5049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1008 21:52:39.045901    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 21:52:39.998133    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.405269611s)
	I1008 21:52:39.998217    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.389238111s)
	I1008 21:52:40.207686    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.575984142s)
	W1008 21:52:40.207722    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:40.207740    5049 retry.go:31] will retry after 340.361653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
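The failure being retried here is a client-side validation error: kubectl rejects any manifest document that does not declare both apiVersion and kind, so the ig-crd.yaml written to /etc/kubernetes/addons most likely contains an empty or truncated document rather than merely misnamed fields. For comparison, a CustomResourceDefinition that passes this check opens with a header like the sketch below (group and names are illustrative placeholders, not the actual Inspektor Gadget CRD):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.example.io   # placeholder name, not the real gadget CRD
    spec:
      group: example.io
      scope: Namespaced
      names:
        kind: Example
        plural: examples
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object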
	I1008 21:52:40.207776    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.57228674s)
	I1008 21:52:40.207829    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.404303179s)
	I1008 21:52:40.207844    5049 addons.go:479] Verifying addon registry=true in "addons-961288"
	I1008 21:52:40.212798    5049 out.go:179] * Verifying registry addon...
	I1008 21:52:40.216444    5049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1008 21:52:40.229377    5049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1008 21:52:40.229402    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
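The kapi wait loop that begins here polls the pod carrying the kubernetes.io/minikube-addons=registry label until it leaves Pending. Outside the harness, an approximate equivalent of that polling (an approximation only; the Go code inspects the pod phase directly rather than the Ready condition) is:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
        wait --for=condition=Ready pod \
        -l kubernetes.io/minikube-addons=registry --timeout=6m0s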
	I1008 21:52:40.462339    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.616953085s)
	I1008 21:52:40.462627    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.588513088s)
	I1008 21:52:40.462667    5049 addons.go:479] Verifying addon metrics-server=true in "addons-961288"
	I1008 21:52:40.548479    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:40.720406    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:41.238613    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:41.284727    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.19556687s)
	W1008 21:52:41.284810    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1008 21:52:41.284844    5049 retry.go:31] will retry after 341.860996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
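Unlike the ig-crd.yaml case, this failure is an ordering race rather than a bad file: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml cannot be mapped until the volumesnapshotclasses.snapshot.storage.k8s.io CRD created by the same apply has been established by the API server, which is why the retry a moment later goes through. Making that ordering explicit would look roughly like the following sketch (not what minikube itself does; it simply retries the combined apply):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        wait --for=condition=Established \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml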
	I1008 21:52:41.284922    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.186843731s)
	I1008 21:52:41.285132    5049 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.028502219s)
	I1008 21:52:41.285914    5049 node_ready.go:35] waiting up to 6m0s for node "addons-961288" to be "Ready" ...
	I1008 21:52:41.288175    5049 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-961288 service yakd-dashboard -n yakd-dashboard
	
	I1008 21:52:41.627222    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 21:52:41.740569    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:41.863936    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.41543177s)
	I1008 21:52:41.864121    5049 addons.go:479] Verifying addon ingress=true in "addons-961288"
	I1008 21:52:41.867296    5049 out.go:179] * Verifying ingress addon...
	I1008 21:52:41.870894    5049 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1008 21:52:41.875846    5049 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1008 21:52:41.875909    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:42.233602    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:42.295012    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.249062771s)
	I1008 21:52:42.295101    5049 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-961288"
	I1008 21:52:42.298227    5049 out.go:179] * Verifying csi-hostpath-driver addon...
	I1008 21:52:42.301925    5049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1008 21:52:42.312376    5049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1008 21:52:42.312448    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:42.375423    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:42.380581    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.832023069s)
	W1008 21:52:42.380687    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:42.380723    5049 retry.go:31] will retry after 466.340843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:42.719989    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:42.735692    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.108376541s)
	I1008 21:52:42.820836    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:42.848163    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:42.874611    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:43.202477    5049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1008 21:52:43.202658    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:43.226336    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:43.238276    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	W1008 21:52:43.290818    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:43.307503    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:43.376375    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:43.381659    5049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1008 21:52:43.403319    5049 addons.go:238] Setting addon gcp-auth=true in "addons-961288"
	I1008 21:52:43.403370    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:43.403893    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:43.427040    5049 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1008 21:52:43.427111    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:43.453697    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:43.719615    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1008 21:52:43.737158    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:43.737237    5049 retry.go:31] will retry after 758.216086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:43.740787    5049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1008 21:52:43.743768    5049 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1008 21:52:43.746645    5049 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1008 21:52:43.746677    5049 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1008 21:52:43.760068    5049 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1008 21:52:43.760088    5049 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1008 21:52:43.773797    5049 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 21:52:43.773823    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1008 21:52:43.789757    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 21:52:43.806058    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:43.874011    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:44.223412    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:44.276708    5049 addons.go:479] Verifying addon gcp-auth=true in "addons-961288"
	I1008 21:52:44.280358    5049 out.go:179] * Verifying gcp-auth addon...
	I1008 21:52:44.283966    5049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1008 21:52:44.293910    5049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1008 21:52:44.293986    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:44.393579    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:44.393833    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:44.496177    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:44.719641    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:44.788919    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:44.805791    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:44.874446    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:45.220939    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:45.290168    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:45.294466    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:45.305957    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:45.375281    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1008 21:52:45.407783    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:45.407824    5049 retry.go:31] will retry after 795.029046ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:45.719748    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:45.789158    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:45.805179    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:45.874074    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:46.203501    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:46.220217    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:46.288600    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:46.306028    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:46.374755    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:46.719367    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:46.789465    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:46.806044    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:46.874519    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1008 21:52:46.999490    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:46.999564    5049 retry.go:31] will retry after 1.486496131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:47.219393    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:47.288415    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:47.306222    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:47.373882    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:47.720256    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:47.788224    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:47.789078    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:47.805173    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:47.873728    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:48.219722    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:48.287507    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:48.305209    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:48.374636    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:48.486811    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:48.720187    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:48.787222    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:48.805138    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:48.874131    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:49.220575    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1008 21:52:49.286921    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:49.286951    5049 retry.go:31] will retry after 2.262041796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:49.288365    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:49.305480    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:49.374367    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:49.719319    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:49.787333    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:49.805956    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:49.873829    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:50.220325    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:50.287378    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:50.289004    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:50.304574    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:50.375027    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:50.719800    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:50.787549    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:50.805419    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:50.874415    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:51.219090    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:51.286543    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:51.304936    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:51.374016    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:51.549417    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:51.719554    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:51.787843    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:51.805371    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:51.874304    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:52.220641    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:52.287454    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:52.289092    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:52.304578    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1008 21:52:52.377219    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:52.377299    5049 retry.go:31] will retry after 3.926801977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:52.390758    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:52.719643    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:52.788031    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:52.805362    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:52.874344    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:53.219270    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:53.287367    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:53.304831    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:53.374515    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:53.719261    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:53.788246    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:53.805555    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:53.874439    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:54.219888    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:54.288646    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:54.292089    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:54.305302    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:54.374457    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:54.719003    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:54.786983    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:54.805117    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:54.874102    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:55.220183    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:55.286767    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:55.305355    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:55.374500    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:55.719864    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:55.787860    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:55.805122    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:55.873970    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:56.220387    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:56.287236    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:56.304604    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:56.305009    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:56.374229    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:56.719524    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:56.787424    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:56.789748    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:56.806102    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:56.874805    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1008 21:52:57.108005    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:57.108074    5049 retry.go:31] will retry after 5.852321959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:57.219717    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:57.288517    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:57.305506    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:57.374194    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:57.720536    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:57.787441    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:57.805180    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:57.874376    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:58.220309    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:58.289077    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:58.305403    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:58.374855    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:58.719928    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:58.786817    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:58.805763    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:58.875013    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:59.220228    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:59.287085    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:59.288982    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:59.305878    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:59.373905    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:59.719741    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:59.787992    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:59.805253    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:59.874007    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:00.221543    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:00.290582    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:00.322977    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:00.374690    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:00.720033    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:00.789538    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:00.805824    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:00.874832    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:01.219585    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:01.287932    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:01.289471    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:01.305516    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:01.374289    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:01.720121    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:01.786962    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:01.805493    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:01.874266    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:02.220156    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:02.287498    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:02.305104    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:02.374111    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:02.720327    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:02.787040    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:02.804753    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:02.874968    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:02.961411    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:03.219452    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:03.287706    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:03.289981    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:03.308698    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:03.374821    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:03.720305    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1008 21:53:03.760910    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:03.760941    5049 retry.go:31] will retry after 7.84841166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:03.786711    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:03.805172    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:03.874068    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:04.220090    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:04.289025    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:04.314556    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:04.374512    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:04.719623    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:04.787260    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:04.805179    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:04.875598    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:05.220075    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:05.286972    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:05.305392    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:05.375178    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:05.720458    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:05.787419    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:05.789246    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:05.804953    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:05.874117    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:06.220277    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:06.286968    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:06.305033    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:06.374215    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:06.720084    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:06.786793    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:06.804930    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:06.873835    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:07.220205    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:07.287272    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:07.305285    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:07.374051    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:07.720293    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:07.788123    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:07.789435    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:07.805344    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:07.874491    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:08.219585    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:08.287565    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:08.305385    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:08.375126    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:08.719232    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:08.787362    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:08.804770    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:08.874779    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:09.220373    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:09.287589    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:09.305569    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:09.374478    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:09.719438    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:09.787238    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:09.804920    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:09.874103    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:10.219261    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:10.288192    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:10.288998    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:10.305559    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:10.374666    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:10.719666    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:10.787814    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:10.805564    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:10.874535    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:11.220194    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:11.287074    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:11.304987    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:11.373817    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:11.610211    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:11.719437    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:11.787476    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:11.804986    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:11.874338    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:12.220068    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:12.288701    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:12.292721    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:12.306394    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:12.375295    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1008 21:53:12.414754    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:12.414828    5049 retry.go:31] will retry after 5.188779325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:12.720015    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:12.788814    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:12.805625    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:12.874424    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:13.219410    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:13.287191    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:13.305059    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:13.374068    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:13.720225    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:13.788021    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:13.804889    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:13.874752    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:14.220149    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:14.288932    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:14.305399    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:14.374434    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:14.719416    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:14.787396    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:14.789651    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:14.805268    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:14.874038    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:15.220176    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:15.287206    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:15.305314    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:15.374205    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:15.719249    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:15.787300    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:15.804671    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:15.874542    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:16.219157    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:16.286773    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:16.304819    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:16.374907    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:16.721963    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:16.787908    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:16.805776    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:16.874388    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:17.223589    5049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1008 21:53:17.223665    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:17.295354    5049 node_ready.go:49] node "addons-961288" is "Ready"
	I1008 21:53:17.295424    5049 node_ready.go:38] duration metric: took 36.009455597s for node "addons-961288" to be "Ready" ...
	I1008 21:53:17.295451    5049 api_server.go:52] waiting for apiserver process to appear ...
	I1008 21:53:17.295538    5049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 21:53:17.301535    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:17.328120    5049 api_server.go:72] duration metric: took 42.252435598s to wait for apiserver process to appear ...
	I1008 21:53:17.328147    5049 api_server.go:88] waiting for apiserver healthz status ...
	I1008 21:53:17.328165    5049 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1008 21:53:17.348062    5049 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1008 21:53:17.349361    5049 api_server.go:141] control plane version: v1.34.1
	I1008 21:53:17.349387    5049 api_server.go:131] duration metric: took 21.233702ms to wait for apiserver health ...
	I1008 21:53:17.349397    5049 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 21:53:17.355660    5049 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1008 21:53:17.355694    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:17.436890    5049 system_pods.go:59] 19 kube-system pods found
	I1008 21:53:17.436945    5049 system_pods.go:61] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:17.436956    5049 system_pods.go:61] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:17.436963    5049 system_pods.go:61] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending
	I1008 21:53:17.436972    5049 system_pods.go:61] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending
	I1008 21:53:17.436976    5049 system_pods.go:61] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:17.436988    5049 system_pods.go:61] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:17.436997    5049 system_pods.go:61] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:17.437016    5049 system_pods.go:61] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:17.437033    5049 system_pods.go:61] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:17.437055    5049 system_pods.go:61] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:17.437061    5049 system_pods.go:61] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:17.437070    5049 system_pods.go:61] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:17.437080    5049 system_pods.go:61] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending
	I1008 21:53:17.437093    5049 system_pods.go:61] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending
	I1008 21:53:17.437105    5049 system_pods.go:61] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:17.437110    5049 system_pods.go:61] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending
	I1008 21:53:17.437119    5049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:17.437129    5049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending
	I1008 21:53:17.437134    5049 system_pods.go:61] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending
	I1008 21:53:17.437140    5049 system_pods.go:74] duration metric: took 87.737742ms to wait for pod list to return data ...
	I1008 21:53:17.437153    5049 default_sa.go:34] waiting for default service account to be created ...
	I1008 21:53:17.442182    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:17.499354    5049 default_sa.go:45] found service account: "default"
	I1008 21:53:17.499383    5049 default_sa.go:55] duration metric: took 62.223651ms for default service account to be created ...
	I1008 21:53:17.499394    5049 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 21:53:17.513684    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:17.513731    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:17.513748    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:17.513754    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending
	I1008 21:53:17.513763    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending
	I1008 21:53:17.513767    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:17.513772    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:17.513783    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:17.513787    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:17.513795    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:17.513813    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:17.513819    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:17.513825    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:17.513835    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending
	I1008 21:53:17.513839    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending
	I1008 21:53:17.513845    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:17.513850    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending
	I1008 21:53:17.513860    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:17.513868    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending
	I1008 21:53:17.513872    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending
	I1008 21:53:17.513897    5049 retry.go:31] will retry after 225.672799ms: missing components: kube-dns
	I1008 21:53:17.603976    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:17.767008    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:17.767045    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:17.767065    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:17.767075    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:17.767082    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending
	I1008 21:53:17.767091    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:17.767096    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:17.767100    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:17.767113    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:17.767120    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:17.767139    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:17.767145    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:17.767157    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:17.767164    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:17.767171    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:17.767180    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:17.767186    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:17.767198    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:17.767210    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:17.767222    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 21:53:17.767237    5049 retry.go:31] will retry after 315.53954ms: missing components: kube-dns
	I1008 21:53:17.767699    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:17.835784    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:17.835968    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:17.880988    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:18.089575    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:18.089614    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:18.089624    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:18.089668    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:18.089678    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 21:53:18.089686    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:18.089691    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:18.089699    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:18.089703    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:18.089709    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:18.089731    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:18.089736    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:18.089742    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:18.089755    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:18.089764    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:18.089788    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:18.089800    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:18.089807    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.089819    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.089825    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 21:53:18.089846    5049 retry.go:31] will retry after 435.438173ms: missing components: kube-dns
	I1008 21:53:18.248257    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:18.354990    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:18.355061    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:18.455065    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:18.531443    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:18.531482    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:18.531492    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:18.531500    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:18.531528    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 21:53:18.531539    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:18.531546    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:18.531550    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:18.531555    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:18.531568    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:18.531573    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:18.531587    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:18.531598    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:18.531605    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:18.531611    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:18.531625    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:18.531634    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:18.531643    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.531650    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.531665    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 21:53:18.531686    5049 retry.go:31] will retry after 410.644437ms: missing components: kube-dns
	I1008 21:53:18.721696    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:18.822189    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:18.822379    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:18.923861    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:18.949777    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:18.949874    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:18.949900    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:18.949944    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:18.949975    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 21:53:18.950005    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:18.950027    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:18.950056    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:18.950080    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:18.950110    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:18.950140    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:18.950178    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:18.950199    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:18.950226    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:18.950273    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:18.950304    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:18.950332    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:18.950359    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.950392    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.950428    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 21:53:18.950537    5049 retry.go:31] will retry after 475.838949ms: missing components: kube-dns
	I1008 21:53:18.984627    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.380610409s)
	W1008 21:53:18.984715    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:18.984756    5049 retry.go:31] will retry after 14.372601313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:19.220359    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:19.295253    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:19.322740    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:19.422124    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:19.434130    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:19.436339    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Running
	I1008 21:53:19.436376    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:19.436386    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:19.436396    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 21:53:19.436400    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:19.436406    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:19.436410    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:19.436415    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:19.436422    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:19.436426    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:19.436431    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:19.436438    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:19.436446    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:19.436456    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:19.436462    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:19.436470    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:19.436478    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:19.436485    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:19.436491    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Running
	I1008 21:53:19.436500    5049 system_pods.go:126] duration metric: took 1.937099519s to wait for k8s-apps to be running ...
	I1008 21:53:19.436507    5049 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 21:53:19.436565    5049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 21:53:19.452123    5049 system_svc.go:56] duration metric: took 15.607346ms WaitForService to wait for kubelet
	I1008 21:53:19.452203    5049 kubeadm.go:586] duration metric: took 44.376522674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 21:53:19.452238    5049 node_conditions.go:102] verifying NodePressure condition ...
	I1008 21:53:19.455605    5049 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 21:53:19.455639    5049 node_conditions.go:123] node cpu capacity is 2
	I1008 21:53:19.455653    5049 node_conditions.go:105] duration metric: took 3.396177ms to run NodePressure ...
	I1008 21:53:19.455667    5049 start.go:241] waiting for startup goroutines ...
	I1008 21:53:19.719830    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:19.788179    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:19.805468    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:19.874886    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:20.220168    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:20.287114    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:20.305601    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:20.374952    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:20.720566    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:20.822315    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:20.822787    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:20.875314    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:21.220330    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:21.287613    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:21.305322    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:21.374688    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:21.720632    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:21.787963    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:21.805912    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:21.874379    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:22.219752    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:22.288081    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:22.305693    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:22.375036    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:22.720724    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:22.787639    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:22.805667    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:22.874493    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:23.220521    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:23.287513    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:23.306310    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:23.406745    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:23.720310    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:23.787309    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:23.806037    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:23.874743    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:24.220470    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:24.287343    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:24.308794    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:24.375157    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:24.719831    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:24.787871    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:24.806189    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:24.874843    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:25.220379    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:25.288080    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:25.305542    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:25.374800    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:25.723037    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:25.788101    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:25.805445    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:25.874989    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:26.221019    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:26.289342    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:26.306316    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:26.379551    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:26.719698    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:26.787252    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:26.805168    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:26.886566    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:27.220075    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:27.287021    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:27.306375    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:27.378327    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:27.721722    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:27.787946    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:27.805692    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:27.875590    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:28.220001    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:28.287571    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:28.306499    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:28.380233    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:28.720909    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:28.791710    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:28.806840    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:28.876414    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:29.219507    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:29.287605    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:29.306299    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:29.374383    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:29.719724    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:29.787756    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:29.805745    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:29.874661    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:30.219788    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:30.288146    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:30.305211    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:30.374331    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:30.720313    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:30.787030    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:30.805837    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:30.874348    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:31.220876    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:31.288053    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:31.305798    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:31.374619    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:31.720338    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:31.787260    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:31.805730    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:31.874556    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:32.219882    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:32.287575    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:32.305561    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:32.375508    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:32.720055    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:32.787479    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:32.805503    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:32.874796    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:33.220416    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:33.287616    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:33.306583    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:33.357846    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:33.374660    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:33.727197    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:33.787931    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:33.805710    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:33.875465    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:34.220401    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:34.320944    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:34.321760    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:34.382016    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:34.603130    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.245251462s)
	W1008 21:53:34.603179    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:34.603199    5049 retry.go:31] will retry after 14.7472332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:34.720642    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:34.787502    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:34.805802    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:34.874879    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:35.220069    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:35.288124    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:35.305313    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:35.373779    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:35.720036    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:35.787838    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:35.805664    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:35.874331    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:36.220431    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:36.287061    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:36.305227    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:36.374850    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:36.721738    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:36.788289    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:36.806208    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:36.874715    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:37.219803    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:37.287666    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:37.305714    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:37.380658    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:37.720722    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:37.787253    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:37.806173    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:37.874979    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:38.220952    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:38.287754    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:38.306583    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:38.374841    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:38.719963    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:38.787806    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:38.806106    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:38.875058    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:39.220567    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:39.287431    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:39.305659    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:39.374845    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:39.720588    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:39.787486    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:39.805969    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:39.874266    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:40.219925    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:40.287991    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:40.305176    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:40.374374    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:40.720022    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:40.786972    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:40.805415    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:40.874677    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:41.219534    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:41.287479    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:41.305731    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:41.375353    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:41.719819    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:41.787749    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:41.806435    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:41.874996    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:42.226068    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:42.288892    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:42.307542    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:42.375919    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:42.720992    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:42.788382    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:42.805819    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:42.875408    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:43.219639    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:43.293390    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:43.306810    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:43.379240    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:43.720908    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:43.788266    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:43.805545    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:43.875887    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:44.222864    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:44.288504    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:44.306895    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:44.375659    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:44.720662    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:44.822601    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:44.823648    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:44.922062    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:45.221301    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:45.288524    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:45.310276    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:45.375487    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:45.720326    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:45.787641    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:45.805710    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:45.874874    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:46.220585    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:46.287602    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:46.305781    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:46.374827    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:46.719831    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:46.788729    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:46.805848    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:46.873877    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:47.220226    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:47.287306    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:47.305607    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:47.375045    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:47.722134    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:47.787928    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:47.806000    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:47.874123    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:48.220253    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:48.287278    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:48.305957    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:48.374861    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:48.724890    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:48.795243    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:48.807550    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:48.875309    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:49.220189    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:49.286802    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:49.305718    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:49.350693    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:49.373736    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:49.720086    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:49.787065    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:49.805167    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:49.875059    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:50.220156    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:50.287475    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:50.306491    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:50.375334    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:50.621158    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.270430064s)
	W1008 21:53:50.621248    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:50.621282    5049 retry.go:31] will retry after 17.302032834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:50.721155    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:50.787724    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:50.806662    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:50.875892    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:51.220758    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:51.288324    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:51.308573    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:51.373861    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:51.720719    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:51.787772    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:51.806411    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:51.875370    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:52.219145    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:52.287096    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:52.306783    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:52.374940    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:52.720538    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:52.787777    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:52.806554    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:52.874638    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:53.219386    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:53.287258    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:53.305401    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:53.374019    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:53.719645    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:53.787771    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:53.806313    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:53.874327    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:54.220384    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:54.287689    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:54.311587    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:54.374224    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:54.719913    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:54.787202    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:54.805078    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:54.873904    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:55.219955    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:55.287022    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:55.305894    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:55.374589    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:55.719706    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:55.788249    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:55.805327    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:55.876795    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:56.220043    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:56.287016    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:56.305848    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:56.374747    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:56.720182    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:56.787949    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:56.806614    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:56.874746    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:57.219736    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:57.287555    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:57.305573    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:57.374231    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:57.720386    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:57.787601    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:57.806174    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:57.874027    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:58.220386    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:58.287848    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:58.306000    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:58.374856    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:58.719326    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:58.787432    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:58.805977    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:58.875079    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:59.221208    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:59.287190    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:59.305753    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:59.374825    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:59.720121    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:59.787319    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:59.805170    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:59.873888    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:00.248445    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:00.355667    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:00.356813    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:00.376533    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:00.720091    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:00.821012    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:00.821463    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:00.874730    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:01.222103    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:01.323193    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:01.323357    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:01.374488    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:01.719745    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:01.787602    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:01.805594    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:01.874291    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:02.221141    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:02.323524    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:02.323671    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:02.375184    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:02.719938    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:02.820749    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:02.820568    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:02.875535    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:03.220120    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:03.287175    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:03.305678    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:03.374833    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:03.719983    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:03.787154    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:03.805289    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:03.874119    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:04.219624    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:04.288108    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:04.305169    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:04.374742    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:04.720262    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:04.822132    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:04.822363    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:04.874681    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:05.220066    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:05.287145    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:05.305955    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:05.374998    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:05.719522    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:05.787616    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:05.805872    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:05.874187    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:06.219460    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:06.287837    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:06.305444    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:06.374903    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:06.720517    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:06.789010    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:06.805856    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:06.873903    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:07.221534    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:07.287908    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:07.304707    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:07.374816    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:07.720912    5049 kapi.go:107] duration metric: took 1m27.504464829s to wait for kubernetes.io/minikube-addons=registry ...
	I1008 21:54:07.787090    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:07.804977    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:07.873923    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:07.923831    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:54:08.287107    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:08.305869    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:08.377568    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:08.787870    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:08.890669    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:08.891199    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:09.287711    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:09.312005    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:09.350486    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.426615075s)
	W1008 21:54:09.350560    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 21:54:09.350657    5049 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
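Note on the failure above: kubectl's client-side validation rejected /etc/kubernetes/addons/ig-crd.yaml because the applied manifest declares no top-level apiVersion or kind (that is exactly what "[apiVersion not set, kind not set]" reports), while the other inspektor-gadget objects in stdout applied as "unchanged" or "configured". A minimal way to reproduce the same class of error outside the cluster, assuming a local copy of the manifest at ./ig-crd.yaml (hypothetical path), is a client-side dry run:

	# hedged sketch: inspect the manifest headers and re-run kubectl's client-side validation
	grep -E '^(apiVersion|kind):' ./ig-crd.yaml        # every YAML document in the file should declare both keys
	kubectl apply --dry-run=client -f ./ig-crd.yaml    # should surface the same validation error while the keys are missing

The --validate=false escape hatch suggested in the error only skips this client-side check; the API server still cannot create an object that declares no kind.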
	I1008 21:54:09.375222    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:09.787404    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:09.805974    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:09.873870    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:10.287137    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:10.305857    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:10.374149    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:10.787141    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:10.805015    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:10.874130    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:11.287249    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:11.305145    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:11.374082    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:11.787120    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:11.805907    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:11.873678    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:12.288532    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:12.306466    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:12.375267    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:12.787689    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:12.806188    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:12.875433    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:13.287302    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:13.305160    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:13.374825    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:13.787409    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:13.806704    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:13.875219    5049 kapi.go:107] duration metric: took 1m32.004327123s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1008 21:54:14.288207    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:14.306111    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:14.787036    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:14.805178    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:15.287722    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:15.306527    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:15.787866    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:15.807056    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:16.287804    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:16.305557    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:16.786968    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:16.805300    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:17.288219    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:17.305499    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:17.802383    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:17.808753    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:18.287274    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:18.305318    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:18.788911    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:18.805953    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:19.287842    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:19.391549    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:19.788261    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:19.805943    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:20.287645    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:20.305504    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:20.794183    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:20.805280    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:21.289146    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:21.311351    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:21.787909    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:21.805456    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:22.288128    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:22.305521    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:22.787327    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:22.805381    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:23.288156    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:23.305115    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:23.788075    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:23.805594    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:24.287567    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:24.306339    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:24.792439    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:24.806205    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:25.288004    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:25.305186    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:25.789717    5049 kapi.go:107] duration metric: took 1m41.5057499s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1008 21:54:25.793157    5049 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-961288 cluster.
	I1008 21:54:25.796078    5049 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1008 21:54:25.798998    5049 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
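For reference, the skip label mentioned above is an ordinary pod label that must be present when the pod is admitted. A minimal sketch (the pod name hello is a placeholder, the echo-server image is reused from elsewhere in this test run, and the label value "true" is an assumption since the message only names the key):

	# hedged sketch: create a pod the gcp-auth webhook should leave without mounted credentials
	kubectl run hello --image=docker.io/kicbase/echo-server:1.0 \
	  --labels=gcp-auth-skip-secret=true

Because the webhook mutates pods at admission time, adding the label to an already-running pod does not change its existing mounts; it only affects pods created after the label is in the spec.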
	I1008 21:54:25.812639    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:26.306246    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:26.813055    5049 kapi.go:107] duration metric: took 1m44.511130471s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1008 21:54:26.816171    5049 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, registry-creds, default-storageclass, ingress-dns, storage-provisioner, amd-gpu-device-plugin, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1008 21:54:26.820126    5049 addons.go:514] duration metric: took 1m51.743976161s for enable addons: enabled=[cloud-spanner nvidia-device-plugin registry-creds default-storageclass ingress-dns storage-provisioner amd-gpu-device-plugin metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1008 21:54:26.820177    5049 start.go:246] waiting for cluster config update ...
	I1008 21:54:26.820199    5049 start.go:255] writing updated cluster config ...
	I1008 21:54:26.820517    5049 ssh_runner.go:195] Run: rm -f paused
	I1008 21:54:26.825970    5049 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 21:54:26.830723    5049 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-44hjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.840820    5049 pod_ready.go:94] pod "coredns-66bc5c9577-44hjj" is "Ready"
	I1008 21:54:26.840849    5049 pod_ready.go:86] duration metric: took 10.092612ms for pod "coredns-66bc5c9577-44hjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.843863    5049 pod_ready.go:83] waiting for pod "etcd-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.848319    5049 pod_ready.go:94] pod "etcd-addons-961288" is "Ready"
	I1008 21:54:26.848350    5049 pod_ready.go:86] duration metric: took 4.460194ms for pod "etcd-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.850789    5049 pod_ready.go:83] waiting for pod "kube-apiserver-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.857871    5049 pod_ready.go:94] pod "kube-apiserver-addons-961288" is "Ready"
	I1008 21:54:26.857896    5049 pod_ready.go:86] duration metric: took 7.07442ms for pod "kube-apiserver-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.860444    5049 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:27.230174    5049 pod_ready.go:94] pod "kube-controller-manager-addons-961288" is "Ready"
	I1008 21:54:27.230203    5049 pod_ready.go:86] duration metric: took 369.733345ms for pod "kube-controller-manager-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:27.430597    5049 pod_ready.go:83] waiting for pod "kube-proxy-xq75f" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:27.830628    5049 pod_ready.go:94] pod "kube-proxy-xq75f" is "Ready"
	I1008 21:54:27.830702    5049 pod_ready.go:86] duration metric: took 400.040344ms for pod "kube-proxy-xq75f" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:28.030548    5049 pod_ready.go:83] waiting for pod "kube-scheduler-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:28.430413    5049 pod_ready.go:94] pod "kube-scheduler-addons-961288" is "Ready"
	I1008 21:54:28.430445    5049 pod_ready.go:86] duration metric: took 399.864471ms for pod "kube-scheduler-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:28.430460    5049 pod_ready.go:40] duration metric: took 1.60445921s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 21:54:28.835165    5049 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 21:54:28.838442    5049 out.go:179] * Done! kubectl is now configured to use "addons-961288" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 21:57:30 addons-961288 crio[829]: time="2025-10-08T21:57:30.091558846Z" level=info msg="Removed container 74abdbfad61941f3b80e39d9e0604533355ea6573407b3c7df1c041a1add08d6: kube-system/registry-creds-764b6fb674-jqkzb/registry-creds" id=978bf29a-add9-4a54-a1ed-807b39205e94 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.017196236Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-t6zjl/POD" id=d889e001-a20e-4584-9087-930a4ba6b3cc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.01727151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.042459957Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-t6zjl Namespace:default ID:f1826506a97c3607a09e93337bac6639ea5d11dc019348b06bf2777ab8d72dfd UID:206685e1-f1f7-4e15-8e09-6159721a76b8 NetNS:/var/run/netns/6f6e8596-c35b-401b-a4f7-8e9dff6090be Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049ed30}] Aliases:map[]}"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.042653534Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-t6zjl to CNI network \"kindnet\" (type=ptp)"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.05888562Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-t6zjl Namespace:default ID:f1826506a97c3607a09e93337bac6639ea5d11dc019348b06bf2777ab8d72dfd UID:206685e1-f1f7-4e15-8e09-6159721a76b8 NetNS:/var/run/netns/6f6e8596-c35b-401b-a4f7-8e9dff6090be Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049ed30}] Aliases:map[]}"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.05927032Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-t6zjl for CNI network kindnet (type=ptp)"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.068101388Z" level=info msg="Ran pod sandbox f1826506a97c3607a09e93337bac6639ea5d11dc019348b06bf2777ab8d72dfd with infra container: default/hello-world-app-5d498dc89-t6zjl/POD" id=d889e001-a20e-4584-9087-930a4ba6b3cc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.069477169Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=dae7520c-9e08-48ca-8d31-510bf864bfe5 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.071060788Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=dae7520c-9e08-48ca-8d31-510bf864bfe5 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.07112222Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=dae7520c-9e08-48ca-8d31-510bf864bfe5 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.072168965Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=420d39e1-d30d-418b-9925-b8bfb3568962 name=/runtime.v1.ImageService/PullImage
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.075564544Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.689767632Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=420d39e1-d30d-418b-9925-b8bfb3568962 name=/runtime.v1.ImageService/PullImage
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.69034316Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ef6510dd-f572-4d52-b038-eef635b37391 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.694416456Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=11ef427d-1b9b-4e73-8fc1-e5f14e1ce2c2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.702758494Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-t6zjl/hello-world-app" id=f67f8253-823c-40f1-8102-5226552c7e1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.703887069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.712817518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.713012547Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a7c802093f9ec78fb9bc1b69caeade27c7c5791be5c96fa7dfa298d1cbc95510/merged/etc/passwd: no such file or directory"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.713041307Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a7c802093f9ec78fb9bc1b69caeade27c7c5791be5c96fa7dfa298d1cbc95510/merged/etc/group: no such file or directory"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.713293854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.736918071Z" level=info msg="Created container a219481c903054dfb512b331847a96bf2eb781005b07f4692cc456ce4eb7acad: default/hello-world-app-5d498dc89-t6zjl/hello-world-app" id=f67f8253-823c-40f1-8102-5226552c7e1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.738259506Z" level=info msg="Starting container: a219481c903054dfb512b331847a96bf2eb781005b07f4692cc456ce4eb7acad" id=89efe756-cc97-407d-80c1-fb648d4b81b0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 21:57:38 addons-961288 crio[829]: time="2025-10-08T21:57:38.740686314Z" level=info msg="Started container" PID=7202 containerID=a219481c903054dfb512b331847a96bf2eb781005b07f4692cc456ce4eb7acad description=default/hello-world-app-5d498dc89-t6zjl/hello-world-app id=89efe756-cc97-407d-80c1-fb648d4b81b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f1826506a97c3607a09e93337bac6639ea5d11dc019348b06bf2777ab8d72dfd
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	a219481c90305       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   f1826506a97c3       hello-world-app-5d498dc89-t6zjl            default
	c14d6b3006094       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             10 seconds ago           Exited              registry-creds                           1                   3555851f7cd44       registry-creds-764b6fb674-jqkzb            kube-system
	353b23aea8431       docker.io/library/nginx@sha256:9388e9644d1118a705af691f800b926c4683665f1f748234e1289add5f5a95cd                                              2 minutes ago            Running             nginx                                    0                   86ee4aca1ea2c       nginx                                      default
	11383bed5f922       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   e053760515e17       busybox                                    default
	1b176619cba2b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                   kube-system
	844fd610070b1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   d7923dab72b20       gcp-auth-78565c9fb4-7bx27                  gcp-auth
	6914889d561d2       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                   kube-system
	04cded645e0f5       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                   kube-system
	53f8bbdff2a61       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                   kube-system
	9e0cfc150cb8b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                   kube-system
	0d2846783262d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   1ebe1d90a66bb       gadget-dz94f                               gadget
	887c67d1d3ec6       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             3 minutes ago            Running             controller                               0                   6274a4e2f0743       ingress-nginx-controller-9cc49f96f-p8cl5   ingress-nginx
	2ee4ab9224d4e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   b10801d52efbc       registry-proxy-f8ff7                       kube-system
	7288cfd067650       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   0d5dc5535135f       csi-hostpath-resizer-0                     kube-system
	83d5f8807dd5a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   63500abcdcf30       snapshot-controller-7d9fbc56b8-vw7qn       kube-system
	ea664b7087a7a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   3 minutes ago            Exited              patch                                    0                   47d4059f39887       ingress-nginx-admission-patch-kp9qj        ingress-nginx
	3627170a702a8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   a865109900bcf       local-path-provisioner-648f6765c9-mlxh6    local-path-storage
	d1380cc21067a       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   a47f2c58c42e3       nvidia-device-plugin-daemonset-fsrx4       kube-system
	ff8f96680aca4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                   kube-system
	39cf7b8150b29       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           3 minutes ago            Running             registry                                 0                   66acba301eda5       registry-66898fdd98-sbgsn                  kube-system
	e989c71cd7b8b       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   fa0b34a1a868c       kube-ingress-dns-minikube                  kube-system
	05f18db74839d       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   1600f0c0374a3       yakd-dashboard-5ff678cb9-vhcqh             yakd-dashboard
	b0a301ec5750f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   4 minutes ago            Exited              create                                   0                   3bcb2ca0af46e       ingress-nginx-admission-create-9d26x       ingress-nginx
	dad9a565111fe       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   0c39840ea7a89       csi-hostpath-attacher-0                    kube-system
	a1add4f38e67c       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   b148989f3e511       metrics-server-85b7d694d7-kwc69            kube-system
	b6beebcffc7ee       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   74163529f02f2       snapshot-controller-7d9fbc56b8-5cc8z       kube-system
	2d42cedd8f1ba       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   2520491499e00       cloud-spanner-emulator-86bd5cbb97-46cw7    default
	d80e987069480       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   d082d2eb58ad9       storage-provisioner                        kube-system
	d8507d936e30a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   f6ddd1c88ebc6       coredns-66bc5c9577-44hjj                   kube-system
	3d83973804a8c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   70f851a80c361       kindnet-6rwkn                              kube-system
	02c59261c1cab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   da932c3af748e       kube-proxy-xq75f                           kube-system
	12f7556456c3b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   ae58ce69f3633       kube-scheduler-addons-961288               kube-system
	c21bc28053396       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   4b85467e19714       kube-apiserver-addons-961288               kube-system
	6a475d38a34a2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   302397bedbe0e       etcd-addons-961288                         kube-system
	a2d50687425bc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   6aee562fdfb4e       kube-controller-manager-addons-961288      kube-system
	
	
	==> coredns [d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4] <==
	[INFO] 10.244.0.11:33758 - 4619 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001642118s
	[INFO] 10.244.0.11:33758 - 7705 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000116809s
	[INFO] 10.244.0.11:33758 - 44340 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000323735s
	[INFO] 10.244.0.11:51714 - 31411 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00019059s
	[INFO] 10.244.0.11:51714 - 31214 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000413549s
	[INFO] 10.244.0.11:39033 - 36948 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000244588s
	[INFO] 10.244.0.11:39033 - 36767 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000209051s
	[INFO] 10.244.0.11:45557 - 25295 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103811s
	[INFO] 10.244.0.11:45557 - 24860 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000151575s
	[INFO] 10.244.0.11:40368 - 62307 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001388184s
	[INFO] 10.244.0.11:40368 - 62495 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001497017s
	[INFO] 10.244.0.11:55237 - 8743 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000117909s
	[INFO] 10.244.0.11:55237 - 8563 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008215s
	[INFO] 10.244.0.21:43278 - 13572 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000373697s
	[INFO] 10.244.0.21:50712 - 3224 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000538284s
	[INFO] 10.244.0.21:53471 - 38019 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000146052s
	[INFO] 10.244.0.21:39073 - 32394 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012686s
	[INFO] 10.244.0.21:35754 - 29539 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119558s
	[INFO] 10.244.0.21:47768 - 50837 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085153s
	[INFO] 10.244.0.21:53416 - 8047 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002146564s
	[INFO] 10.244.0.21:44978 - 20256 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002307517s
	[INFO] 10.244.0.21:48873 - 23376 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002586107s
	[INFO] 10.244.0.21:42710 - 40745 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002571355s
	[INFO] 10.244.0.23:49432 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196621s
	[INFO] 10.244.0.23:58580 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112477s
	
	
	==> describe nodes <==
	Name:               addons-961288
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-961288
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=addons-961288
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T21_52_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-961288
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-961288"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 21:52:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-961288
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 21:57:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 21:57:35 +0000   Wed, 08 Oct 2025 21:52:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 21:57:35 +0000   Wed, 08 Oct 2025 21:52:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 21:57:35 +0000   Wed, 08 Oct 2025 21:52:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 21:57:35 +0000   Wed, 08 Oct 2025 21:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-961288
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 37daf547d8bc4acebb6a0460dc06380e
	  System UUID:                b425271d-6922-48ac-8987-93fee9234cf0
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     cloud-spanner-emulator-86bd5cbb97-46cw7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  default                     hello-world-app-5d498dc89-t6zjl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-dz94f                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  gcp-auth                    gcp-auth-78565c9fb4-7bx27                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-p8cl5    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m58s
	  kube-system                 coredns-66bc5c9577-44hjj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m4s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpathplugin-ncxdq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 etcd-addons-961288                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m9s
	  kube-system                 kindnet-6rwkn                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m4s
	  kube-system                 kube-apiserver-addons-961288                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-addons-961288       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-xq75f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-961288                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 metrics-server-85b7d694d7-kwc69             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m
	  kube-system                 nvidia-device-plugin-daemonset-fsrx4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 registry-66898fdd98-sbgsn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 registry-creds-764b6fb674-jqkzb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 registry-proxy-f8ff7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-5cc8z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 snapshot-controller-7d9fbc56b8-vw7qn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  local-path-storage          local-path-provisioner-648f6765c9-mlxh6     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vhcqh              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m3s                   kube-proxy       
	  Normal   Starting                 5m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m17s (x9 over 5m17s)  kubelet          Node addons-961288 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node addons-961288 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node addons-961288 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s                   kubelet          Node addons-961288 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s                   kubelet          Node addons-961288 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s                   kubelet          Node addons-961288 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m5s                   node-controller  Node addons-961288 event: Registered Node addons-961288 in Controller
	  Normal   NodeReady                4m23s                  kubelet          Node addons-961288 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015330] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.500107] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036203] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743682] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.166411] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 8 21:52] hrtimer: interrupt took 47692610 ns
	[ +22.956892] overlayfs: idmapped layers are currently not supported
	[  +0.073462] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233] <==
	{"level":"warn","ts":"2025-10-08T21:52:26.086800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.087627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.117820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.136808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.148935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.176217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.194608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.211679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.232989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.244647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.265898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.297965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.306467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.316220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.346701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.377720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.400169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.418076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.531509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:42.322512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:42.346151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:53:04.252602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:53:04.266810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:53:04.301588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:53:04.319224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36358","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [844fd610070b1570ffb554c8c62f56928ee0b1145a929df7408a160502839834] <==
	2025/10/08 21:54:25 GCP Auth Webhook started!
	2025/10/08 21:54:29 Ready to marshal response ...
	2025/10/08 21:54:29 Ready to write response ...
	2025/10/08 21:54:29 Ready to marshal response ...
	2025/10/08 21:54:29 Ready to write response ...
	2025/10/08 21:54:29 Ready to marshal response ...
	2025/10/08 21:54:29 Ready to write response ...
	2025/10/08 21:54:50 Ready to marshal response ...
	2025/10/08 21:54:50 Ready to write response ...
	2025/10/08 21:54:51 Ready to marshal response ...
	2025/10/08 21:54:51 Ready to write response ...
	2025/10/08 21:54:51 Ready to marshal response ...
	2025/10/08 21:54:51 Ready to write response ...
	2025/10/08 21:55:00 Ready to marshal response ...
	2025/10/08 21:55:00 Ready to write response ...
	2025/10/08 21:55:07 Ready to marshal response ...
	2025/10/08 21:55:07 Ready to write response ...
	2025/10/08 21:55:17 Ready to marshal response ...
	2025/10/08 21:55:17 Ready to write response ...
	2025/10/08 21:55:39 Ready to marshal response ...
	2025/10/08 21:55:39 Ready to write response ...
	2025/10/08 21:57:37 Ready to marshal response ...
	2025/10/08 21:57:37 Ready to write response ...
	
	
	==> kernel <==
	 21:57:39 up 40 min,  0 user,  load average: 0.49, 1.04, 0.58
	Linux addons-961288 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6] <==
	I1008 21:55:36.422320       1 main.go:301] handling current node
	I1008 21:55:46.425008       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:55:46.425053       1 main.go:301] handling current node
	I1008 21:55:56.427094       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:55:56.427132       1 main.go:301] handling current node
	I1008 21:56:06.429180       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:56:06.429213       1 main.go:301] handling current node
	I1008 21:56:16.420729       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:56:16.420773       1 main.go:301] handling current node
	I1008 21:56:26.429753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:56:26.429896       1 main.go:301] handling current node
	I1008 21:56:36.427618       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:56:36.427652       1 main.go:301] handling current node
	I1008 21:56:46.427617       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:56:46.427729       1 main.go:301] handling current node
	I1008 21:56:56.428976       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:56:56.429095       1 main.go:301] handling current node
	I1008 21:57:06.425363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:57:06.425465       1 main.go:301] handling current node
	I1008 21:57:16.427937       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:57:16.427973       1 main.go:301] handling current node
	I1008 21:57:26.420714       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:57:26.420745       1 main.go:301] handling current node
	I1008 21:57:36.421774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:57:36.421887       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c] <==
	W1008 21:53:04.252012       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1008 21:53:04.266815       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1008 21:53:04.301441       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1008 21:53:04.316823       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1008 21:53:17.003488       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.229.178:443: connect: connection refused
	E1008 21:53:17.003570       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.229.178:443: connect: connection refused" logger="UnhandledError"
	W1008 21:53:17.005129       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.229.178:443: connect: connection refused
	E1008 21:53:17.005199       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.229.178:443: connect: connection refused" logger="UnhandledError"
	W1008 21:53:17.090452       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.229.178:443: connect: connection refused
	E1008 21:53:17.090504       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.229.178:443: connect: connection refused" logger="UnhandledError"
	E1008 21:53:37.464753       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.229.237:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.229.237:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.229.237:443: connect: connection refused" logger="UnhandledError"
	W1008 21:53:37.466218       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 21:53:37.466278       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 21:53:37.544922       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1008 21:53:37.582587       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1008 21:54:39.476370       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46598: use of closed network connection
	E1008 21:54:39.627218       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46616: use of closed network connection
	I1008 21:55:17.628966       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1008 21:55:17.915969       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.179.35"}
	I1008 21:55:18.908390       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1008 21:55:21.151891       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1008 21:57:37.862976       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.199.228"}
	
	
	==> kube-controller-manager [a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235] <==
	I1008 21:52:34.282117       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 21:52:34.283274       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 21:52:34.289767       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 21:52:34.284581       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 21:52:34.289932       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 21:52:34.290181       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 21:52:34.284594       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 21:52:34.284611       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 21:52:34.284629       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 21:52:34.284640       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 21:52:34.284690       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 21:52:34.284935       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 21:52:34.285076       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 21:52:34.293844       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-961288" podCIDRs=["10.244.0.0/24"]
	E1008 21:52:39.588675       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1008 21:53:04.244983       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 21:53:04.245207       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1008 21:53:04.245252       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1008 21:53:04.290938       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1008 21:53:04.295096       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1008 21:53:04.345901       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 21:53:04.395567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 21:53:19.238326       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1008 21:53:34.351200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 21:53:34.411810       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd] <==
	I1008 21:52:36.266014       1 server_linux.go:53] "Using iptables proxy"
	I1008 21:52:36.343332       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 21:52:36.443490       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 21:52:36.443533       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1008 21:52:36.443613       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 21:52:36.499863       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 21:52:36.499910       1 server_linux.go:132] "Using iptables Proxier"
	I1008 21:52:36.515830       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 21:52:36.530062       1 server.go:527] "Version info" version="v1.34.1"
	I1008 21:52:36.530088       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 21:52:36.531946       1 config.go:200] "Starting service config controller"
	I1008 21:52:36.531960       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 21:52:36.531985       1 config.go:106] "Starting endpoint slice config controller"
	I1008 21:52:36.531990       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 21:52:36.532001       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 21:52:36.532005       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 21:52:36.532638       1 config.go:309] "Starting node config controller"
	I1008 21:52:36.532646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 21:52:36.532652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 21:52:36.632570       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 21:52:36.632621       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 21:52:36.632659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b] <==
	E1008 21:52:27.444580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 21:52:27.444625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 21:52:27.448241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 21:52:27.448519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 21:52:27.448707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 21:52:27.448808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 21:52:27.449002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 21:52:27.449103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 21:52:27.449259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 21:52:27.449377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 21:52:28.273711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 21:52:28.303588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 21:52:28.311644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1008 21:52:28.337273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 21:52:28.394807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 21:52:28.416533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 21:52:28.459145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 21:52:28.461462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 21:52:28.480047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 21:52:28.482761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 21:52:28.592458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 21:52:28.611652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 21:52:28.637146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 21:52:28.646613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1008 21:52:31.036660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 21:55:48 addons-961288 kubelet[1286]: I1008 21:55:48.163812    1286 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7cf5a056-9ee8-4a3b-97b4-93603919dbff-gcp-creds\") on node \"addons-961288\" DevicePath \"\""
	Oct 08 21:55:48 addons-961288 kubelet[1286]: I1008 21:55:48.170473    1286 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-62c1fd2b-96d6-4abe-805c-2b74b9cf4ba8" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^87219732-a491-11f0-96ba-429e310877f9") on node "addons-961288"
	Oct 08 21:55:48 addons-961288 kubelet[1286]: I1008 21:55:48.213537    1286 scope.go:117] "RemoveContainer" containerID="9fd350e8ae8eda5739f9ecfad249cae7fc7e67fb7a69f1cd8d7a585d69d2708f"
	Oct 08 21:55:48 addons-961288 kubelet[1286]: I1008 21:55:48.224109    1286 scope.go:117] "RemoveContainer" containerID="9fd350e8ae8eda5739f9ecfad249cae7fc7e67fb7a69f1cd8d7a585d69d2708f"
	Oct 08 21:55:48 addons-961288 kubelet[1286]: E1008 21:55:48.224832    1286 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd350e8ae8eda5739f9ecfad249cae7fc7e67fb7a69f1cd8d7a585d69d2708f\": container with ID starting with 9fd350e8ae8eda5739f9ecfad249cae7fc7e67fb7a69f1cd8d7a585d69d2708f not found: ID does not exist" containerID="9fd350e8ae8eda5739f9ecfad249cae7fc7e67fb7a69f1cd8d7a585d69d2708f"
	Oct 08 21:55:48 addons-961288 kubelet[1286]: I1008 21:55:48.224870    1286 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd350e8ae8eda5739f9ecfad249cae7fc7e67fb7a69f1cd8d7a585d69d2708f"} err="failed to get container status \"9fd350e8ae8eda5739f9ecfad249cae7fc7e67fb7a69f1cd8d7a585d69d2708f\": rpc error: code = NotFound desc = could not find container \"9fd350e8ae8eda5739f9ecfad249cae7fc7e67fb7a69f1cd8d7a585d69d2708f\": container with ID starting with 9fd350e8ae8eda5739f9ecfad249cae7fc7e67fb7a69f1cd8d7a585d69d2708f not found: ID does not exist"
	Oct 08 21:55:48 addons-961288 kubelet[1286]: I1008 21:55:48.264162    1286 reconciler_common.go:299] "Volume detached for volume \"pvc-62c1fd2b-96d6-4abe-805c-2b74b9cf4ba8\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^87219732-a491-11f0-96ba-429e310877f9\") on node \"addons-961288\" DevicePath \"\""
	Oct 08 21:55:49 addons-961288 kubelet[1286]: I1008 21:55:49.915961    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf5a056-9ee8-4a3b-97b4-93603919dbff" path="/var/lib/kubelet/pods/7cf5a056-9ee8-4a3b-97b4-93603919dbff/volumes"
	Oct 08 21:55:55 addons-961288 kubelet[1286]: I1008 21:55:55.912470    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-sbgsn" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 21:56:33 addons-961288 kubelet[1286]: I1008 21:56:33.913739    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-f8ff7" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 21:56:45 addons-961288 kubelet[1286]: I1008 21:56:45.913444    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fsrx4" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 21:57:13 addons-961288 kubelet[1286]: I1008 21:57:13.912422    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-sbgsn" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 21:57:27 addons-961288 kubelet[1286]: I1008 21:57:27.312855    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jqkzb" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 21:57:29 addons-961288 kubelet[1286]: I1008 21:57:29.587413    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jqkzb" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 21:57:29 addons-961288 kubelet[1286]: I1008 21:57:29.587467    1286 scope.go:117] "RemoveContainer" containerID="74abdbfad61941f3b80e39d9e0604533355ea6573407b3c7df1c041a1add08d6"
	Oct 08 21:57:30 addons-961288 kubelet[1286]: I1008 21:57:30.077196    1286 scope.go:117] "RemoveContainer" containerID="74abdbfad61941f3b80e39d9e0604533355ea6573407b3c7df1c041a1add08d6"
	Oct 08 21:57:30 addons-961288 kubelet[1286]: I1008 21:57:30.592644    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jqkzb" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 21:57:30 addons-961288 kubelet[1286]: I1008 21:57:30.592700    1286 scope.go:117] "RemoveContainer" containerID="c14d6b3006094968eb636488ec1cab7d951c47fa88b2ca08ef1b5f942eaf2ba7"
	Oct 08 21:57:30 addons-961288 kubelet[1286]: E1008 21:57:30.592849    1286 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-jqkzb_kube-system(8c01014d-f946-46ff-a3a7-33fb2c409449)\"" pod="kube-system/registry-creds-764b6fb674-jqkzb" podUID="8c01014d-f946-46ff-a3a7-33fb2c409449"
	Oct 08 21:57:31 addons-961288 kubelet[1286]: I1008 21:57:31.595746    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-jqkzb" secret="" err="secret \"gcp-auth\" not found"
	Oct 08 21:57:31 addons-961288 kubelet[1286]: I1008 21:57:31.595804    1286 scope.go:117] "RemoveContainer" containerID="c14d6b3006094968eb636488ec1cab7d951c47fa88b2ca08ef1b5f942eaf2ba7"
	Oct 08 21:57:31 addons-961288 kubelet[1286]: E1008 21:57:31.595966    1286 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-jqkzb_kube-system(8c01014d-f946-46ff-a3a7-33fb2c409449)\"" pod="kube-system/registry-creds-764b6fb674-jqkzb" podUID="8c01014d-f946-46ff-a3a7-33fb2c409449"
	Oct 08 21:57:37 addons-961288 kubelet[1286]: I1008 21:57:37.779784    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/206685e1-f1f7-4e15-8e09-6159721a76b8-gcp-creds\") pod \"hello-world-app-5d498dc89-t6zjl\" (UID: \"206685e1-f1f7-4e15-8e09-6159721a76b8\") " pod="default/hello-world-app-5d498dc89-t6zjl"
	Oct 08 21:57:37 addons-961288 kubelet[1286]: I1008 21:57:37.779835    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlxwf\" (UniqueName: \"kubernetes.io/projected/206685e1-f1f7-4e15-8e09-6159721a76b8-kube-api-access-zlxwf\") pod \"hello-world-app-5d498dc89-t6zjl\" (UID: \"206685e1-f1f7-4e15-8e09-6159721a76b8\") " pod="default/hello-world-app-5d498dc89-t6zjl"
	Oct 08 21:57:38 addons-961288 kubelet[1286]: W1008 21:57:38.067423    1286 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/crio-f1826506a97c3607a09e93337bac6639ea5d11dc019348b06bf2777ab8d72dfd WatchSource:0}: Error finding container f1826506a97c3607a09e93337bac6639ea5d11dc019348b06bf2777ab8d72dfd: Status 404 returned error can't find the container with id f1826506a97c3607a09e93337bac6639ea5d11dc019348b06bf2777ab8d72dfd
	
	
	==> storage-provisioner [d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2] <==
	W1008 21:57:15.763925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:17.767284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:17.771957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:19.775337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:19.783157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:21.786474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:21.792351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:23.795580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:23.799929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:25.803216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:25.807561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:27.810946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:27.818988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:29.825831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:29.832671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:31.835384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:31.839817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:33.842529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:33.849135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:35.851886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:35.856580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:37.883788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:37.897237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:39.901954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:57:39.907237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-961288 -n addons-961288
helpers_test.go:269: (dbg) Run:  kubectl --context addons-961288 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-9d26x ingress-nginx-admission-patch-kp9qj
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-961288 describe pod ingress-nginx-admission-create-9d26x ingress-nginx-admission-patch-kp9qj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-961288 describe pod ingress-nginx-admission-create-9d26x ingress-nginx-admission-patch-kp9qj: exit status 1 (95.519567ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9d26x" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kp9qj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-961288 describe pod ingress-nginx-admission-create-9d26x ingress-nginx-admission-patch-kp9qj: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (310.635722ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 21:57:41.118485   14691 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:57:41.118755   14691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:57:41.118786   14691 out.go:374] Setting ErrFile to fd 2...
	I1008 21:57:41.118806   14691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:57:41.119085   14691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:57:41.119411   14691 mustload.go:65] Loading cluster: addons-961288
	I1008 21:57:41.119833   14691 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:57:41.119872   14691 addons.go:606] checking whether the cluster is paused
	I1008 21:57:41.120017   14691 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:57:41.120052   14691 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:57:41.120581   14691 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:57:41.143559   14691 ssh_runner.go:195] Run: systemctl --version
	I1008 21:57:41.143612   14691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:57:41.177855   14691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:57:41.280739   14691 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:57:41.280833   14691 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:57:41.319863   14691 cri.go:89] found id: "c14d6b3006094968eb636488ec1cab7d951c47fa88b2ca08ef1b5f942eaf2ba7"
	I1008 21:57:41.319895   14691 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:57:41.319901   14691 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:57:41.319905   14691 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:57:41.319909   14691 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:57:41.319913   14691 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:57:41.319916   14691 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:57:41.319919   14691 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:57:41.319922   14691 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:57:41.319928   14691 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:57:41.319932   14691 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:57:41.319935   14691 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:57:41.319938   14691 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:57:41.319941   14691 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:57:41.319945   14691 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:57:41.319953   14691 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:57:41.319960   14691 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:57:41.319965   14691 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:57:41.319969   14691 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:57:41.319972   14691 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:57:41.319977   14691 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:57:41.319985   14691 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:57:41.319989   14691 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:57:41.319992   14691 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:57:41.319995   14691 cri.go:89] found id: ""
	I1008 21:57:41.320049   14691 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:57:41.337265   14691 out.go:203] 
	W1008 21:57:41.340243   14691 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:57:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:57:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:57:41.340274   14691 out.go:285] * 
	* 
	W1008 21:57:41.344630   14691 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:57:41.347681   14691 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable ingress --alsologtostderr -v=1: exit status 11 (282.963267ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 21:57:41.413453   14803 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:57:41.413875   14803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:57:41.413885   14803 out.go:374] Setting ErrFile to fd 2...
	I1008 21:57:41.413891   14803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:57:41.414151   14803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:57:41.414421   14803 mustload.go:65] Loading cluster: addons-961288
	I1008 21:57:41.414774   14803 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:57:41.414783   14803 addons.go:606] checking whether the cluster is paused
	I1008 21:57:41.414883   14803 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:57:41.414898   14803 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:57:41.415313   14803 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:57:41.443857   14803 ssh_runner.go:195] Run: systemctl --version
	I1008 21:57:41.443916   14803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:57:41.466945   14803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:57:41.576506   14803 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:57:41.576589   14803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:57:41.606400   14803 cri.go:89] found id: "c14d6b3006094968eb636488ec1cab7d951c47fa88b2ca08ef1b5f942eaf2ba7"
	I1008 21:57:41.606423   14803 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:57:41.606429   14803 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:57:41.606433   14803 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:57:41.606437   14803 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:57:41.606441   14803 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:57:41.606444   14803 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:57:41.606468   14803 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:57:41.606477   14803 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:57:41.606548   14803 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:57:41.606555   14803 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:57:41.606559   14803 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:57:41.606562   14803 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:57:41.606565   14803 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:57:41.606569   14803 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:57:41.606573   14803 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:57:41.606579   14803 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:57:41.606585   14803 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:57:41.606588   14803 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:57:41.606591   14803 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:57:41.606596   14803 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:57:41.606599   14803 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:57:41.606603   14803 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:57:41.606606   14803 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:57:41.606610   14803 cri.go:89] found id: ""
	I1008 21:57:41.606664   14803 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:57:41.621712   14803 out.go:203] 
	W1008 21:57:41.624665   14803 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:57:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:57:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:57:41.624693   14803 out.go:285] * 
	* 
	W1008 21:57:41.629144   14803 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:57:41.632066   14803 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.34s)
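
A note on the recurring failure: every addons enable/disable in this run exits with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED) at the same point. The paused-state check lists kube-system containers via crictl successfully, then runs "sudo runc list -f json", which fails because /run/runc does not exist on the node. Below is a minimal sketch of re-running that probe by hand against the same profile; the first two commands mirror the log, while the crio config and directory probes are assumptions for checking which OCI runtime (and state directory) CRI-O is actually using, not commands taken from this report.

	# Sketch only: re-run the paused-state probe the addon commands perform.
	minikube -p addons-961288 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds in the log
	minikube -p addons-961288 ssh -- sudo runc list -f json                                                      # fails: /run/runc missing
	# Assumed follow-up probes: which runtime is CRI-O configured with, and
	# which runtime state directories actually exist on this node?
	minikube -p addons-961288 ssh -- sudo crio config | grep -A5 'crio.runtime'
	minikube -p addons-961288 ssh -- ls -d /run/runc /run/crun /run/crio 2>/dev/null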

TestAddons/parallel/InspektorGadget (6.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-dz94f" [11617e63-d84d-476b-8568-3183869022d3] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005794211s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (262.094421ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 21:55:17.090137   12728 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:55:17.090294   12728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:17.090312   12728 out.go:374] Setting ErrFile to fd 2...
	I1008 21:55:17.090317   12728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:17.090587   12728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:55:17.090873   12728 mustload.go:65] Loading cluster: addons-961288
	I1008 21:55:17.091225   12728 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:17.091242   12728 addons.go:606] checking whether the cluster is paused
	I1008 21:55:17.091344   12728 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:17.091362   12728 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:55:17.091813   12728 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:55:17.109510   12728 ssh_runner.go:195] Run: systemctl --version
	I1008 21:55:17.109568   12728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:55:17.129017   12728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:55:17.234377   12728 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:55:17.234532   12728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:55:17.269399   12728 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:55:17.269424   12728 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:55:17.269430   12728 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:55:17.269435   12728 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:55:17.269439   12728 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:55:17.269443   12728 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:55:17.269447   12728 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:55:17.269451   12728 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:55:17.269454   12728 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:55:17.269461   12728 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:55:17.269465   12728 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:55:17.269469   12728 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:55:17.269472   12728 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:55:17.269475   12728 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:55:17.269479   12728 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:55:17.269487   12728 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:55:17.269496   12728 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:55:17.269501   12728 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:55:17.269505   12728 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:55:17.269508   12728 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:55:17.269514   12728 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:55:17.269520   12728 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:55:17.269523   12728 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:55:17.269526   12728 cri.go:89] found id: ""
	I1008 21:55:17.269578   12728 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:55:17.284260   12728 out.go:203] 
	W1008 21:55:17.287031   12728 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:55:17.287063   12728 out.go:285] * 
	* 
	W1008 21:55:17.291552   12728 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:55:17.294457   12728 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

TestAddons/parallel/MetricsServer (6.44s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.167435ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003251841s
addons_test.go:463: (dbg) Run:  kubectl --context addons-961288 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (304.48601ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 21:55:10.803677   12571 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:55:10.803811   12571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:10.803823   12571 out.go:374] Setting ErrFile to fd 2...
	I1008 21:55:10.803829   12571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:10.804143   12571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:55:10.804446   12571 mustload.go:65] Loading cluster: addons-961288
	I1008 21:55:10.804811   12571 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:10.804829   12571 addons.go:606] checking whether the cluster is paused
	I1008 21:55:10.804948   12571 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:10.804968   12571 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:55:10.805418   12571 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:55:10.825855   12571 ssh_runner.go:195] Run: systemctl --version
	I1008 21:55:10.825967   12571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:55:10.853047   12571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:55:10.956904   12571 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:55:10.956997   12571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:55:10.992022   12571 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:55:10.992040   12571 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:55:10.992045   12571 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:55:10.992049   12571 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:55:10.992052   12571 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:55:10.992056   12571 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:55:10.992059   12571 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:55:10.992063   12571 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:55:10.992066   12571 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:55:10.992075   12571 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:55:10.992078   12571 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:55:10.992082   12571 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:55:10.992085   12571 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:55:10.992088   12571 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:55:10.992094   12571 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:55:10.992103   12571 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:55:10.992107   12571 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:55:10.992112   12571 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:55:10.992115   12571 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:55:10.992118   12571 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:55:10.992122   12571 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:55:10.992125   12571 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:55:10.992128   12571 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:55:10.992131   12571 cri.go:89] found id: ""
	I1008 21:55:10.992182   12571 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:55:11.012840   12571 out.go:203] 
	W1008 21:55:11.016619   12571 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:55:11.016710   12571 out.go:285] * 
	* 
	W1008 21:55:11.021164   12571 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:55:11.024728   12571 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.44s)

TestAddons/parallel/CSI (47.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1008 21:55:01.325890    4286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1008 21:55:01.331807    4286 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1008 21:55:01.331832    4286 kapi.go:107] duration metric: took 8.556877ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.566421ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-961288 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-961288 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [25249cda-4e2b-47d0-a418-36e9431655ca] Pending
helpers_test.go:352: "task-pv-pod" [25249cda-4e2b-47d0-a418-36e9431655ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [25249cda-4e2b-47d0-a418-36e9431655ca] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004279536s
addons_test.go:572: (dbg) Run:  kubectl --context addons-961288 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-961288 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-961288 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-961288 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-961288 delete pod task-pv-pod: (1.119048872s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-961288 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-961288 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-961288 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [7cf5a056-9ee8-4a3b-97b4-93603919dbff] Pending
helpers_test.go:352: "task-pv-pod-restore" [7cf5a056-9ee8-4a3b-97b4-93603919dbff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [7cf5a056-9ee8-4a3b-97b4-93603919dbff] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003388388s
addons_test.go:614: (dbg) Run:  kubectl --context addons-961288 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-961288 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-961288 delete volumesnapshot new-snapshot-demo
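
The snapshot/restore flow above passes; only the trailing addon disable calls fail, on the same paused-state check noted earlier. For reference, the readiness polling the helpers perform in this block reduces to jsonpath reads like the ones below; the kubectl wait form is an equivalent alternative sketched here as an assumption, not something the test itself runs.

	kubectl --context addons-961288 get pvc hpvc-restore -o jsonpath='{.status.phase}'
	kubectl --context addons-961288 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	# Equivalent wait-based form (assumption; jsonpath waits need kubectl >= 1.23):
	kubectl --context addons-961288 wait pvc/hpvc-restore --for=jsonpath='{.status.phase}'=Bound --timeout=6m
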
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (270.685233ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 21:55:48.709724   13502 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:55:48.709952   13502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:48.709982   13502 out.go:374] Setting ErrFile to fd 2...
	I1008 21:55:48.710002   13502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:48.710327   13502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:55:48.710646   13502 mustload.go:65] Loading cluster: addons-961288
	I1008 21:55:48.711063   13502 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:48.711101   13502 addons.go:606] checking whether the cluster is paused
	I1008 21:55:48.711249   13502 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:48.711285   13502 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:55:48.711807   13502 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:55:48.728867   13502 ssh_runner.go:195] Run: systemctl --version
	I1008 21:55:48.728918   13502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:55:48.749170   13502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:55:48.852143   13502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:55:48.852244   13502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:55:48.883464   13502 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:55:48.883486   13502 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:55:48.883492   13502 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:55:48.883501   13502 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:55:48.883505   13502 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:55:48.883509   13502 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:55:48.883513   13502 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:55:48.883517   13502 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:55:48.883520   13502 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:55:48.883526   13502 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:55:48.883531   13502 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:55:48.883535   13502 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:55:48.883538   13502 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:55:48.883541   13502 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:55:48.883544   13502 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:55:48.883549   13502 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:55:48.883553   13502 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:55:48.883557   13502 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:55:48.883560   13502 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:55:48.883563   13502 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:55:48.883567   13502 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:55:48.883575   13502 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:55:48.883578   13502 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:55:48.883582   13502 cri.go:89] found id: ""
	I1008 21:55:48.883639   13502 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:55:48.898176   13502 out.go:203] 
	W1008 21:55:48.901036   13502 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:55:48.901079   13502 out.go:285] * 
	* 
	W1008 21:55:48.905393   13502 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:55:48.908289   13502 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (289.538162ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 21:55:48.968093   13545 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:55:48.968402   13545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:48.968435   13545 out.go:374] Setting ErrFile to fd 2...
	I1008 21:55:48.968456   13545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:48.968780   13545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:55:48.969397   13545 mustload.go:65] Loading cluster: addons-961288
	I1008 21:55:48.969864   13545 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:48.969902   13545 addons.go:606] checking whether the cluster is paused
	I1008 21:55:48.970047   13545 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:48.970082   13545 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:55:48.970591   13545 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:55:48.992501   13545 ssh_runner.go:195] Run: systemctl --version
	I1008 21:55:48.992551   13545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:55:49.018086   13545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:55:49.132393   13545 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:55:49.132490   13545 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:55:49.166379   13545 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:55:49.166401   13545 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:55:49.166406   13545 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:55:49.166411   13545 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:55:49.166414   13545 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:55:49.166418   13545 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:55:49.166422   13545 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:55:49.166425   13545 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:55:49.166429   13545 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:55:49.166435   13545 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:55:49.166438   13545 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:55:49.166441   13545 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:55:49.166444   13545 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:55:49.166455   13545 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:55:49.166461   13545 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:55:49.166472   13545 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:55:49.166483   13545 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:55:49.166488   13545 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:55:49.166491   13545 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:55:49.166494   13545 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:55:49.166499   13545 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:55:49.166502   13545 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:55:49.166505   13545 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:55:49.166508   13545 cri.go:89] found id: ""
	I1008 21:55:49.166557   13545 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:55:49.184538   13545 out.go:203] 
	W1008 21:55:49.189603   13545 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:55:49.189668   13545 out.go:285] * 
	* 
	W1008 21:55:49.194023   13545 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:55:49.197482   13545 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (47.88s)

TestAddons/parallel/Headlamp (3.73s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-961288 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-961288 --alsologtostderr -v=1: exit status 11 (334.092919ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 21:55:00.946507   11832 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:55:00.946749   11832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:00.946757   11832 out.go:374] Setting ErrFile to fd 2...
	I1008 21:55:00.946761   11832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:00.947775   11832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:55:00.948087   11832 mustload.go:65] Loading cluster: addons-961288
	I1008 21:55:00.948442   11832 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:00.948451   11832 addons.go:606] checking whether the cluster is paused
	I1008 21:55:00.948647   11832 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:00.948666   11832 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:55:00.949150   11832 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:55:00.972721   11832 ssh_runner.go:195] Run: systemctl --version
	I1008 21:55:00.972776   11832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:55:00.996465   11832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:55:01.105142   11832 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:55:01.105243   11832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:55:01.157518   11832 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:55:01.157539   11832 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:55:01.157543   11832 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:55:01.157547   11832 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:55:01.157550   11832 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:55:01.157553   11832 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:55:01.157557   11832 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:55:01.157560   11832 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:55:01.157563   11832 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:55:01.157571   11832 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:55:01.157575   11832 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:55:01.157578   11832 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:55:01.157581   11832 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:55:01.157585   11832 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:55:01.157588   11832 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:55:01.157597   11832 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:55:01.157603   11832 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:55:01.157608   11832 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:55:01.157611   11832 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:55:01.157614   11832 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:55:01.157619   11832 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:55:01.157622   11832 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:55:01.157625   11832 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:55:01.157685   11832 cri.go:89] found id: ""
	I1008 21:55:01.157735   11832 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:55:01.174943   11832 out.go:203] 
	W1008 21:55:01.177766   11832 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:55:01.177840   11832 out.go:285] * 
	* 
	W1008 21:55:01.182618   11832 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:55:01.185569   11832 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-961288 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-961288
helpers_test.go:243: (dbg) docker inspect addons-961288:

-- stdout --
	[
	    {
	        "Id": "d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be",
	        "Created": "2025-10-08T21:52:02.301949344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5452,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T21:52:02.363734503Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/hostname",
	        "HostsPath": "/var/lib/docker/containers/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/hosts",
	        "LogPath": "/var/lib/docker/containers/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be-json.log",
	        "Name": "/addons-961288",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-961288:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-961288",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be",
	                "LowerDir": "/var/lib/docker/overlay2/113f949d6358e5bb1dad460c4616a70c68b0923fd3b93a46c9f2bf6ee84244d2-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/113f949d6358e5bb1dad460c4616a70c68b0923fd3b93a46c9f2bf6ee84244d2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/113f949d6358e5bb1dad460c4616a70c68b0923fd3b93a46c9f2bf6ee84244d2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/113f949d6358e5bb1dad460c4616a70c68b0923fd3b93a46c9f2bf6ee84244d2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-961288",
	                "Source": "/var/lib/docker/volumes/addons-961288/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-961288",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-961288",
	                "name.minikube.sigs.k8s.io": "addons-961288",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3cdab071a35cfac65641a7acaae834bd541793bf285d0997896ec3452aa1c585",
	            "SandboxKey": "/var/run/docker/netns/3cdab071a35c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-961288": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:59:00:e0:57:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f0bdf34367215c4acf2890eaad3f999c0ad12a34fb55be42e954c6184bdd2e9",
	                    "EndpointID": "1f1a2468cb2bbae3bad169dc7c81d4d6e0c375f16a39fd99b28b528ea741095d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-961288",
	                        "d45eb870dafc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
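(The host ports recorded under NetworkSettings.Ports above can be read back individually with a Go-template filter; this is a sketch of the same query the provisioning log below runs to find the forwarded SSH port.)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-961288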
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-961288 -n addons-961288
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-961288 logs -n 25: (1.615840841s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-117299 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-117299   │ jenkins │ v1.37.0 │ 08 Oct 25 21:50 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Oct 25 21:50 UTC │ 08 Oct 25 21:50 UTC │
	│ delete  │ -p download-only-117299                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-117299   │ jenkins │ v1.37.0 │ 08 Oct 25 21:50 UTC │ 08 Oct 25 21:50 UTC │
	│ start   │ -o=json --download-only -p download-only-473331 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-473331   │ jenkins │ v1.37.0 │ 08 Oct 25 21:50 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:51 UTC │
	│ delete  │ -p download-only-473331                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-473331   │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:51 UTC │
	│ delete  │ -p download-only-117299                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-117299   │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:51 UTC │
	│ delete  │ -p download-only-473331                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-473331   │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:51 UTC │
	│ start   │ --download-only -p download-docker-889641 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-889641 │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │                     │
	│ delete  │ -p download-docker-889641                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-889641 │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:51 UTC │
	│ start   │ --download-only -p binary-mirror-098672 --alsologtostderr --binary-mirror http://127.0.0.1:36433 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-098672   │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │                     │
	│ delete  │ -p binary-mirror-098672                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-098672   │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:51 UTC │
	│ addons  │ enable dashboard -p addons-961288                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │                     │
	│ addons  │ disable dashboard -p addons-961288                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │                     │
	│ start   │ -p addons-961288 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:51 UTC │ 08 Oct 25 21:54 UTC │
	│ addons  │ addons-961288 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ addons  │ addons-961288 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ addons  │ addons-961288 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ addons  │ addons-961288 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ ip      │ addons-961288 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │ 08 Oct 25 21:54 UTC │
	│ addons  │ addons-961288 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:54 UTC │                     │
	│ ssh     │ addons-961288 ssh cat /opt/local-path-provisioner/pvc-8e4ef856-8168-49ac-bec5-fd30ac333963_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │ 08 Oct 25 21:55 UTC │
	│ addons  │ addons-961288 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ enable headlamp -p addons-961288 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	│ addons  │ addons-961288 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-961288          │ jenkins │ v1.37.0 │ 08 Oct 25 21:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 21:51:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 21:51:36.292051    5049 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:51:36.292261    5049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:51:36.292276    5049 out.go:374] Setting ErrFile to fd 2...
	I1008 21:51:36.292282    5049 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:51:36.292581    5049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:51:36.293078    5049 out.go:368] Setting JSON to false
	I1008 21:51:36.293921    5049 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2047,"bootTime":1759958250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 21:51:36.293991    5049 start.go:141] virtualization:  
	I1008 21:51:36.297380    5049 out.go:179] * [addons-961288] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 21:51:36.301138    5049 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 21:51:36.301173    5049 notify.go:220] Checking for updates...
	I1008 21:51:36.304164    5049 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 21:51:36.307369    5049 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 21:51:36.310175    5049 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 21:51:36.313040    5049 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 21:51:36.315989    5049 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 21:51:36.319059    5049 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 21:51:36.345759    5049 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 21:51:36.345957    5049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 21:51:36.414608    5049 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-08 21:51:36.405472994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 21:51:36.414717    5049 docker.go:318] overlay module found
	I1008 21:51:36.417818    5049 out.go:179] * Using the docker driver based on user configuration
	I1008 21:51:36.420759    5049 start.go:305] selected driver: docker
	I1008 21:51:36.420797    5049 start.go:925] validating driver "docker" against <nil>
	I1008 21:51:36.420813    5049 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 21:51:36.421561    5049 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 21:51:36.475047    5049 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-08 21:51:36.466315739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 21:51:36.475215    5049 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 21:51:36.475442    5049 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 21:51:36.478472    5049 out.go:179] * Using Docker driver with root privileges
	I1008 21:51:36.481364    5049 cni.go:84] Creating CNI manager for ""
	I1008 21:51:36.481438    5049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 21:51:36.481449    5049 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 21:51:36.481527    5049 start.go:349] cluster config:
	{Name:addons-961288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 21:51:36.484733    5049 out.go:179] * Starting "addons-961288" primary control-plane node in "addons-961288" cluster
	I1008 21:51:36.487504    5049 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 21:51:36.490406    5049 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 21:51:36.493297    5049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 21:51:36.493356    5049 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 21:51:36.493371    5049 cache.go:58] Caching tarball of preloaded images
	I1008 21:51:36.493390    5049 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 21:51:36.493458    5049 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 21:51:36.493467    5049 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 21:51:36.493823    5049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/config.json ...
	I1008 21:51:36.493891    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/config.json: {Name:mk705f89e8e849311d188624c5dd93d0bb86e461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:51:36.509420    5049 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1008 21:51:36.509573    5049 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1008 21:51:36.509597    5049 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1008 21:51:36.509602    5049 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1008 21:51:36.509610    5049 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1008 21:51:36.509615    5049 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1008 21:51:54.772520    5049 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1008 21:51:54.772571    5049 cache.go:232] Successfully downloaded all kic artifacts
	I1008 21:51:54.772600    5049 start.go:360] acquireMachinesLock for addons-961288: {Name:mkdb9a642333218a6563588e9d25960d2f4ebc46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 21:51:54.772733    5049 start.go:364] duration metric: took 111.303µs to acquireMachinesLock for "addons-961288"
	I1008 21:51:54.772766    5049 start.go:93] Provisioning new machine with config: &{Name:addons-961288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 21:51:54.772853    5049 start.go:125] createHost starting for "" (driver="docker")
	I1008 21:51:54.776366    5049 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1008 21:51:54.776658    5049 start.go:159] libmachine.API.Create for "addons-961288" (driver="docker")
	I1008 21:51:54.776709    5049 client.go:168] LocalClient.Create starting
	I1008 21:51:54.776849    5049 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 21:51:54.903661    5049 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 21:51:55.094724    5049 cli_runner.go:164] Run: docker network inspect addons-961288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 21:51:55.111344    5049 cli_runner.go:211] docker network inspect addons-961288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 21:51:55.111436    5049 network_create.go:284] running [docker network inspect addons-961288] to gather additional debugging logs...
	I1008 21:51:55.111458    5049 cli_runner.go:164] Run: docker network inspect addons-961288
	W1008 21:51:55.128544    5049 cli_runner.go:211] docker network inspect addons-961288 returned with exit code 1
	I1008 21:51:55.128576    5049 network_create.go:287] error running [docker network inspect addons-961288]: docker network inspect addons-961288: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-961288 not found
	I1008 21:51:55.128602    5049 network_create.go:289] output of [docker network inspect addons-961288]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-961288 not found
	
	** /stderr **
	I1008 21:51:55.128710    5049 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 21:51:55.145015    5049 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b1130}
	I1008 21:51:55.145054    5049 network_create.go:124] attempt to create docker network addons-961288 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1008 21:51:55.145107    5049 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-961288 addons-961288
	I1008 21:51:55.197785    5049 network_create.go:108] docker network addons-961288 192.168.49.0/24 created
	I1008 21:51:55.197820    5049 kic.go:121] calculated static IP "192.168.49.2" for the "addons-961288" container
	I1008 21:51:55.197887    5049 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 21:51:55.213469    5049 cli_runner.go:164] Run: docker volume create addons-961288 --label name.minikube.sigs.k8s.io=addons-961288 --label created_by.minikube.sigs.k8s.io=true
	I1008 21:51:55.232226    5049 oci.go:103] Successfully created a docker volume addons-961288
	I1008 21:51:55.232319    5049 cli_runner.go:164] Run: docker run --rm --name addons-961288-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-961288 --entrypoint /usr/bin/test -v addons-961288:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 21:51:57.684901    5049 cli_runner.go:217] Completed: docker run --rm --name addons-961288-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-961288 --entrypoint /usr/bin/test -v addons-961288:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (2.452542691s)
	I1008 21:51:57.684931    5049 oci.go:107] Successfully prepared a docker volume addons-961288
	I1008 21:51:57.684966    5049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 21:51:57.684984    5049 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 21:51:57.685058    5049 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-961288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 21:52:02.228999    5049 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-961288:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.543903536s)
	I1008 21:52:02.229034    5049 kic.go:203] duration metric: took 4.544046215s to extract preloaded images to volume ...
	W1008 21:52:02.229188    5049 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 21:52:02.229316    5049 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 21:52:02.286655    5049 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-961288 --name addons-961288 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-961288 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-961288 --network addons-961288 --ip 192.168.49.2 --volume addons-961288:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 21:52:02.618199    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Running}}
	I1008 21:52:02.638020    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:02.660886    5049 cli_runner.go:164] Run: docker exec addons-961288 stat /var/lib/dpkg/alternatives/iptables
	I1008 21:52:02.711872    5049 oci.go:144] the created container "addons-961288" has a running status.
	I1008 21:52:02.711899    5049 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa...
	I1008 21:52:02.970470    5049 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 21:52:03.001850    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:03.025275    5049 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 21:52:03.025301    5049 kic_runner.go:114] Args: [docker exec --privileged addons-961288 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 21:52:03.106051    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:03.128498    5049 machine.go:93] provisionDockerMachine start ...
	I1008 21:52:03.128584    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:03.155603    5049 main.go:141] libmachine: Using SSH client type: native
	I1008 21:52:03.155926    5049 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 21:52:03.155935    5049 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 21:52:03.156535    5049 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33340->127.0.0.1:32768: read: connection reset by peer
	I1008 21:52:06.305066    5049 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-961288
	
	I1008 21:52:06.305087    5049 ubuntu.go:182] provisioning hostname "addons-961288"
	I1008 21:52:06.305147    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:06.322911    5049 main.go:141] libmachine: Using SSH client type: native
	I1008 21:52:06.323253    5049 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 21:52:06.323272    5049 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-961288 && echo "addons-961288" | sudo tee /etc/hostname
	I1008 21:52:06.474679    5049 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-961288
	
	I1008 21:52:06.474753    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:06.491657    5049 main.go:141] libmachine: Using SSH client type: native
	I1008 21:52:06.491964    5049 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 21:52:06.491986    5049 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-961288' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-961288/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-961288' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 21:52:06.637802    5049 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 21:52:06.637840    5049 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 21:52:06.637863    5049 ubuntu.go:190] setting up certificates
	I1008 21:52:06.637874    5049 provision.go:84] configureAuth start
	I1008 21:52:06.637943    5049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-961288
	I1008 21:52:06.655395    5049 provision.go:143] copyHostCerts
	I1008 21:52:06.655489    5049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 21:52:06.655620    5049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 21:52:06.655683    5049 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 21:52:06.655740    5049 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.addons-961288 san=[127.0.0.1 192.168.49.2 addons-961288 localhost minikube]
	I1008 21:52:06.921476    5049 provision.go:177] copyRemoteCerts
	I1008 21:52:06.921544    5049 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 21:52:06.921587    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:06.938536    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:07.041222    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 21:52:07.058842    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1008 21:52:07.075595    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 21:52:07.092175    5049 provision.go:87] duration metric: took 454.274831ms to configureAuth
	I1008 21:52:07.092244    5049 ubuntu.go:206] setting minikube options for container-runtime
	I1008 21:52:07.092448    5049 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:52:07.092562    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.110254    5049 main.go:141] libmachine: Using SSH client type: native
	I1008 21:52:07.110548    5049 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1008 21:52:07.110566    5049 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 21:52:07.359291    5049 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 21:52:07.359330    5049 machine.go:96] duration metric: took 4.230798217s to provisionDockerMachine
	I1008 21:52:07.359339    5049 client.go:171] duration metric: took 12.582618409s to LocalClient.Create
	I1008 21:52:07.359352    5049 start.go:167] duration metric: took 12.582694094s to libmachine.API.Create "addons-961288"
	I1008 21:52:07.359363    5049 start.go:293] postStartSetup for "addons-961288" (driver="docker")
	I1008 21:52:07.359373    5049 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 21:52:07.359449    5049 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 21:52:07.359494    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.377809    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:07.481917    5049 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 21:52:07.485093    5049 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 21:52:07.485124    5049 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 21:52:07.485135    5049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 21:52:07.485200    5049 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 21:52:07.485231    5049 start.go:296] duration metric: took 125.861643ms for postStartSetup
	I1008 21:52:07.485539    5049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-961288
	I1008 21:52:07.502685    5049 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/config.json ...
	I1008 21:52:07.502985    5049 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 21:52:07.503040    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.520700    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:07.618994    5049 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 21:52:07.624275    5049 start.go:128] duration metric: took 12.851405954s to createHost
	I1008 21:52:07.624299    5049 start.go:83] releasing machines lock for "addons-961288", held for 12.851552703s
	I1008 21:52:07.624390    5049 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-961288
	I1008 21:52:07.641455    5049 ssh_runner.go:195] Run: cat /version.json
	I1008 21:52:07.641505    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.641511    5049 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 21:52:07.641572    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:07.660354    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:07.670530    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:07.842903    5049 ssh_runner.go:195] Run: systemctl --version
	I1008 21:52:07.849026    5049 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 21:52:07.884099    5049 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 21:52:07.888213    5049 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 21:52:07.888336    5049 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 21:52:07.916413    5049 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 21:52:07.916448    5049 start.go:495] detecting cgroup driver to use...
	I1008 21:52:07.916480    5049 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 21:52:07.916547    5049 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 21:52:07.933357    5049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 21:52:07.945513    5049 docker.go:218] disabling cri-docker service (if available) ...
	I1008 21:52:07.945575    5049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 21:52:07.963459    5049 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 21:52:07.982140    5049 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 21:52:08.106168    5049 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 21:52:08.228457    5049 docker.go:234] disabling docker service ...
	I1008 21:52:08.228522    5049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 21:52:08.249141    5049 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 21:52:08.262379    5049 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 21:52:08.381443    5049 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 21:52:08.499889    5049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 21:52:08.512602    5049 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 21:52:08.527273    5049 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 21:52:08.527339    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.536228    5049 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 21:52:08.536294    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.544989    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.553685    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.562202    5049 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 21:52:08.570825    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.579667    5049 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.592849    5049 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 21:52:08.601648    5049 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 21:52:08.609236    5049 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1008 21:52:08.609318    5049 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1008 21:52:08.622755    5049 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 21:52:08.630516    5049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 21:52:08.742422    5049 ssh_runner.go:195] Run: sudo systemctl restart crio
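For reference, the CRI-O runtime tweaks applied in the preceding steps can be collected into one script; this is a minimal sketch assembled from the commands shown in the log (crictl endpoint, pause image, cgroupfs driver, conmon cgroup, unprivileged-port sysctl, IP forwarding), not additional test output:

	#!/usr/bin/env bash
	set -euo pipefail
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# expose low ports to unprivileged pods via default_sysctls
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	# kernel prerequisites, then restart the runtime
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio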
	I1008 21:52:08.867707    5049 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 21:52:08.867825    5049 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 21:52:08.871512    5049 start.go:563] Will wait 60s for crictl version
	I1008 21:52:08.871600    5049 ssh_runner.go:195] Run: which crictl
	I1008 21:52:08.875359    5049 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 21:52:08.904382    5049 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 21:52:08.904578    5049 ssh_runner.go:195] Run: crio --version
	I1008 21:52:08.937222    5049 ssh_runner.go:195] Run: crio --version
	I1008 21:52:08.969443    5049 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 21:52:08.972274    5049 cli_runner.go:164] Run: docker network inspect addons-961288 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 21:52:08.987381    5049 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 21:52:08.991330    5049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 21:52:09.001167    5049 kubeadm.go:883] updating cluster {Name:addons-961288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 21:52:09.001283    5049 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 21:52:09.001373    5049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 21:52:09.038423    5049 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 21:52:09.038447    5049 crio.go:433] Images already preloaded, skipping extraction
	I1008 21:52:09.038502    5049 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 21:52:09.067265    5049 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 21:52:09.067286    5049 cache_images.go:85] Images are preloaded, skipping loading
	I1008 21:52:09.067295    5049 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1008 21:52:09.067385    5049 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-961288 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 21:52:09.067474    5049 ssh_runner.go:195] Run: crio config
	I1008 21:52:09.120765    5049 cni.go:84] Creating CNI manager for ""
	I1008 21:52:09.120791    5049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 21:52:09.120839    5049 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 21:52:09.120868    5049 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-961288 NodeName:addons-961288 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 21:52:09.121050    5049 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-961288"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
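The kubeadm configuration rendered above is written to /var/tmp/minikube/kubeadm.yaml a few steps later. As a rough sketch (the binary and config paths are taken from this log; the flag is standard kubeadm usage), the generated config can be exercised on the node without changing anything via a dry run:

	# parse and validate the generated config without creating cluster state
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run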
	I1008 21:52:09.121168    5049 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 21:52:09.128885    5049 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 21:52:09.128977    5049 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 21:52:09.136807    5049 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1008 21:52:09.149457    5049 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 21:52:09.164379    5049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1008 21:52:09.177389    5049 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 21:52:09.181027    5049 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 21:52:09.190758    5049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 21:52:09.294419    5049 ssh_runner.go:195] Run: sudo systemctl start kubelet
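Once the kubelet unit file and its 10-kubeadm.conf drop-in have been copied and the service started, its state can be inspected with standard systemd tooling; a small sketch, not part of the test run:

	# confirm the kubelet came up and see the flags injected via the drop-in
	systemctl is-active kubelet
	systemctl cat kubelet
	journalctl -u kubelet --no-pager -n 50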
	I1008 21:52:09.314125    5049 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288 for IP: 192.168.49.2
	I1008 21:52:09.314188    5049 certs.go:195] generating shared ca certs ...
	I1008 21:52:09.314219    5049 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:09.314397    5049 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 21:52:09.426815    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt ...
	I1008 21:52:09.426845    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt: {Name:mka3917889a100f4c1dcc59b106b117a87bc8e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:09.427033    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key ...
	I1008 21:52:09.427046    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key: {Name:mkb00cad5a1a442be62fc42dd2dd6615aa701bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:09.427138    5049 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 21:52:10.005204    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt ...
	I1008 21:52:10.005240    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt: {Name:mke7237f65caf5c4ac41b833cf33815e54380d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:10.005434    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key ...
	I1008 21:52:10.005443    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key: {Name:mke0fe2ca068371875b4dd6e540113cc51c1c087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:10.005511    5049 certs.go:257] generating profile certs ...
	I1008 21:52:10.005584    5049 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.key
	I1008 21:52:10.005600    5049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt with IP's: []
	I1008 21:52:11.414091    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt ...
	I1008 21:52:11.414122    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: {Name:mk21ad367910f0f6fa334a16944294025b7939aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.414312    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.key ...
	I1008 21:52:11.414324    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.key: {Name:mk1a8dd0fed5e7d7a3722edb5e3a8baf9cf375a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.414407    5049 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key.b37b7217
	I1008 21:52:11.414428    5049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt.b37b7217 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1008 21:52:11.597366    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt.b37b7217 ...
	I1008 21:52:11.597389    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt.b37b7217: {Name:mk553d0c64138e528dbe64b1cb0d06d3de9b99e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.597537    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key.b37b7217 ...
	I1008 21:52:11.597546    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key.b37b7217: {Name:mke7aee446fe9d079fd0797289ac5b1e60fe3660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.597615    5049 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt.b37b7217 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt
	I1008 21:52:11.597725    5049 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key.b37b7217 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key
	I1008 21:52:11.597775    5049 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.key
	I1008 21:52:11.597791    5049 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.crt with IP's: []
	I1008 21:52:11.921739    5049 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.crt ...
	I1008 21:52:11.921772    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.crt: {Name:mk6fa9bf513c72e6bcfbc7e02d11980100d655d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.921958    5049 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.key ...
	I1008 21:52:11.921970    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.key: {Name:mke7f79c1430c2088347633b162587d54537eea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:11.922171    5049 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 21:52:11.922214    5049 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 21:52:11.922245    5049 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 21:52:11.922279    5049 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 21:52:11.922895    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 21:52:11.941146    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 21:52:11.959233    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 21:52:11.976799    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 21:52:11.994315    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1008 21:52:12.014028    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 21:52:12.032846    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 21:52:12.051387    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 21:52:12.069246    5049 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 21:52:12.088062    5049 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 21:52:12.100796    5049 ssh_runner.go:195] Run: openssl version
	I1008 21:52:12.107357    5049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 21:52:12.115848    5049 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 21:52:12.120760    5049 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 21:52:12.120875    5049 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 21:52:12.161476    5049 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
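The symlink name b5213941.0 created here follows the OpenSSL subject-hash convention: clients that trust /etc/ssl/certs locate a CA by the hash of its subject. A minimal sketch of the same step, using only paths that appear in the log:

	# derive the subject hash and create the lookup symlink OpenSSL expects
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	# the CA should verify against itself
	openssl verify -CAfile /usr/share/ca-certificates/minikubeCA.pem /usr/share/ca-certificates/minikubeCA.pem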
	I1008 21:52:12.169723    5049 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 21:52:12.172960    5049 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 21:52:12.173012    5049 kubeadm.go:400] StartCluster: {Name:addons-961288 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-961288 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 21:52:12.173084    5049 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:52:12.173137    5049 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:52:12.199482    5049 cri.go:89] found id: ""
	I1008 21:52:12.199574    5049 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 21:52:12.207402    5049 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 21:52:12.215073    5049 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 21:52:12.215180    5049 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 21:52:12.223286    5049 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 21:52:12.223306    5049 kubeadm.go:157] found existing configuration files:
	
	I1008 21:52:12.223362    5049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 21:52:12.230890    5049 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 21:52:12.230953    5049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 21:52:12.238335    5049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 21:52:12.246145    5049 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 21:52:12.246208    5049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 21:52:12.253766    5049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 21:52:12.261411    5049 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 21:52:12.261530    5049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 21:52:12.269016    5049 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 21:52:12.276758    5049 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 21:52:12.276888    5049 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 21:52:12.284457    5049 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 21:52:12.327718    5049 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 21:52:12.327945    5049 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 21:52:12.364191    5049 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 21:52:12.364271    5049 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 21:52:12.364315    5049 kubeadm.go:318] OS: Linux
	I1008 21:52:12.364368    5049 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 21:52:12.364423    5049 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 21:52:12.364475    5049 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 21:52:12.364529    5049 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 21:52:12.364583    5049 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 21:52:12.364668    5049 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 21:52:12.364721    5049 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 21:52:12.364776    5049 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 21:52:12.364828    5049 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 21:52:12.438474    5049 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 21:52:12.438594    5049 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 21:52:12.438695    5049 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 21:52:12.450041    5049 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 21:52:12.454426    5049 out.go:252]   - Generating certificates and keys ...
	I1008 21:52:12.454599    5049 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 21:52:12.454685    5049 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 21:52:12.571138    5049 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 21:52:12.955634    5049 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 21:52:13.319000    5049 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 21:52:14.216336    5049 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 21:52:15.412074    5049 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 21:52:15.412361    5049 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-961288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 21:52:16.105987    5049 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 21:52:16.106143    5049 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-961288 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1008 21:52:16.564088    5049 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 21:52:16.948128    5049 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 21:52:17.466038    5049 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 21:52:17.466317    5049 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 21:52:17.824288    5049 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 21:52:19.511346    5049 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 21:52:19.721588    5049 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 21:52:20.666938    5049 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 21:52:20.940338    5049 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 21:52:20.941093    5049 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 21:52:20.943736    5049 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 21:52:20.947177    5049 out.go:252]   - Booting up control plane ...
	I1008 21:52:20.947282    5049 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 21:52:20.947383    5049 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 21:52:20.947454    5049 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 21:52:20.963884    5049 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 21:52:20.964246    5049 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 21:52:20.971727    5049 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 21:52:20.972076    5049 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 21:52:20.972299    5049 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 21:52:21.110140    5049 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 21:52:21.110265    5049 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 21:52:23.105430    5049 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001278731s
	I1008 21:52:23.109213    5049 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 21:52:23.109319    5049 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1008 21:52:23.109417    5049 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 21:52:23.109504    5049 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 21:52:27.045576    5049 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.93571035s
	I1008 21:52:27.450260    5049 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.341065181s
	I1008 21:52:29.113569    5049 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002314528s
	I1008 21:52:29.131119    5049 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 21:52:29.146805    5049 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 21:52:29.160880    5049 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 21:52:29.161099    5049 kubeadm.go:318] [mark-control-plane] Marking the node addons-961288 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 21:52:29.177037    5049 kubeadm.go:318] [bootstrap-token] Using token: s30xba.14zmtly2zm02vci8
	I1008 21:52:29.182153    5049 out.go:252]   - Configuring RBAC rules ...
	I1008 21:52:29.182288    5049 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 21:52:29.184464    5049 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 21:52:29.192621    5049 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 21:52:29.196660    5049 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 21:52:29.202566    5049 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 21:52:29.206577    5049 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 21:52:29.520056    5049 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 21:52:29.949987    5049 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 21:52:30.518664    5049 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 21:52:30.520176    5049 kubeadm.go:318] 
	I1008 21:52:30.520291    5049 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 21:52:30.520315    5049 kubeadm.go:318] 
	I1008 21:52:30.520400    5049 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 21:52:30.520406    5049 kubeadm.go:318] 
	I1008 21:52:30.520432    5049 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 21:52:30.520573    5049 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 21:52:30.520637    5049 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 21:52:30.520643    5049 kubeadm.go:318] 
	I1008 21:52:30.520699    5049 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 21:52:30.520704    5049 kubeadm.go:318] 
	I1008 21:52:30.520754    5049 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 21:52:30.520759    5049 kubeadm.go:318] 
	I1008 21:52:30.520813    5049 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 21:52:30.520891    5049 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 21:52:30.520962    5049 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 21:52:30.520967    5049 kubeadm.go:318] 
	I1008 21:52:30.521077    5049 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 21:52:30.521158    5049 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 21:52:30.521163    5049 kubeadm.go:318] 
	I1008 21:52:30.521251    5049 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token s30xba.14zmtly2zm02vci8 \
	I1008 21:52:30.521358    5049 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 21:52:30.521380    5049 kubeadm.go:318] 	--control-plane 
	I1008 21:52:30.521385    5049 kubeadm.go:318] 
	I1008 21:52:30.521477    5049 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 21:52:30.521481    5049 kubeadm.go:318] 
	I1008 21:52:30.521581    5049 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token s30xba.14zmtly2zm02vci8 \
	I1008 21:52:30.521712    5049 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 21:52:30.525791    5049 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 21:52:30.526037    5049 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 21:52:30.526146    5049 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
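The control-plane health endpoints that kubeadm polls during [control-plane-check] above can also be probed by hand from the node; a sketch using the same URLs that appear in the log (the components serve self-signed TLS, hence -k):

	curl -sk https://192.168.49.2:8443/livez && echo      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz && echo      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez && echo        # kube-scheduler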
	I1008 21:52:30.526161    5049 cni.go:84] Creating CNI manager for ""
	I1008 21:52:30.526169    5049 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 21:52:30.529336    5049 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 21:52:30.532292    5049 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 21:52:30.536409    5049 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 21:52:30.536428    5049 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 21:52:30.549494    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 21:52:30.838851    5049 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 21:52:30.838901    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:30.838965    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-961288 minikube.k8s.io/updated_at=2025_10_08T21_52_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=addons-961288 minikube.k8s.io/primary=true
	I1008 21:52:30.975855    5049 ops.go:34] apiserver oom_adj: -16
	I1008 21:52:30.975968    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:31.476637    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:31.976778    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:32.476850    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:32.976552    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:33.476126    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:33.976975    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:34.476068    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:34.976589    5049 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 21:52:35.074858    5049 kubeadm.go:1113] duration metric: took 4.23600977s to wait for elevateKubeSystemPrivileges
	I1008 21:52:35.074890    5049 kubeadm.go:402] duration metric: took 22.901882517s to StartCluster
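The polling loop above waits for the "default" service account as part of elevateKubeSystemPrivileges, after the minikube-rbac cluster role binding is created. A minimal way to repeat the same checks, reusing the kubectl binary and kubeconfig paths from the log:

	# the service account the loop above polls for
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n default get serviceaccount default
	# node registration; it becomes Ready once the CNI daemonset is running
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get nodes -o wide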
	I1008 21:52:35.074907    5049 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:35.075016    5049 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 21:52:35.075450    5049 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 21:52:35.075648    5049 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 21:52:35.075810    5049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 21:52:35.076098    5049 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:52:35.076144    5049 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
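The toEnable map above is what the test's start flags resolve to. Outside of the test code, the same addons can be inspected and toggled per profile with the minikube CLI; the invocation below is illustrative (the profile name comes from the log, the subcommands are standard minikube), not something the test ran at this point:

	# list addon status for this profile, then toggle individual addons
	minikube -p addons-961288 addons list
	minikube -p addons-961288 addons enable metrics-server
	minikube -p addons-961288 addons disable inspektor-gadget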
	I1008 21:52:35.076226    5049 addons.go:69] Setting yakd=true in profile "addons-961288"
	I1008 21:52:35.076244    5049 addons.go:238] Setting addon yakd=true in "addons-961288"
	I1008 21:52:35.076265    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.076762    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.077086    5049 addons.go:69] Setting inspektor-gadget=true in profile "addons-961288"
	I1008 21:52:35.077106    5049 addons.go:238] Setting addon inspektor-gadget=true in "addons-961288"
	I1008 21:52:35.077136    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.077559    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.077853    5049 addons.go:69] Setting metrics-server=true in profile "addons-961288"
	I1008 21:52:35.077880    5049 addons.go:238] Setting addon metrics-server=true in "addons-961288"
	I1008 21:52:35.077928    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.078334    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.084090    5049 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-961288"
	I1008 21:52:35.084164    5049 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-961288"
	I1008 21:52:35.084222    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.084836    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.087295    5049 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-961288"
	I1008 21:52:35.087342    5049 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-961288"
	I1008 21:52:35.087383    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.087852    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.090808    5049 addons.go:69] Setting cloud-spanner=true in profile "addons-961288"
	I1008 21:52:35.090847    5049 addons.go:238] Setting addon cloud-spanner=true in "addons-961288"
	I1008 21:52:35.090881    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.091442    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.095379    5049 addons.go:69] Setting registry=true in profile "addons-961288"
	I1008 21:52:35.095413    5049 addons.go:238] Setting addon registry=true in "addons-961288"
	I1008 21:52:35.095457    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.096151    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.098825    5049 addons.go:69] Setting registry-creds=true in profile "addons-961288"
	I1008 21:52:35.098895    5049 addons.go:238] Setting addon registry-creds=true in "addons-961288"
	I1008 21:52:35.108110    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.108608    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.099060    5049 addons.go:69] Setting storage-provisioner=true in profile "addons-961288"
	I1008 21:52:35.128247    5049 addons.go:238] Setting addon storage-provisioner=true in "addons-961288"
	I1008 21:52:35.128289    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.128763    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.099072    5049 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-961288"
	I1008 21:52:35.155763    5049 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-961288"
	I1008 21:52:35.156095    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.099079    5049 addons.go:69] Setting volcano=true in profile "addons-961288"
	I1008 21:52:35.174723    5049 addons.go:238] Setting addon volcano=true in "addons-961288"
	I1008 21:52:35.174765    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.175263    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.181593    5049 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1008 21:52:35.184488    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1008 21:52:35.184518    5049 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1008 21:52:35.184582    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.099085    5049 addons.go:69] Setting volumesnapshots=true in profile "addons-961288"
	I1008 21:52:35.189901    5049 addons.go:238] Setting addon volumesnapshots=true in "addons-961288"
	I1008 21:52:35.189942    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.190403    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.104502    5049 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-961288"
	I1008 21:52:35.206069    5049 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-961288"
	I1008 21:52:35.206106    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.206575    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.104518    5049 addons.go:69] Setting default-storageclass=true in profile "addons-961288"
	I1008 21:52:35.225451    5049 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-961288"
	I1008 21:52:35.225806    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.229607    5049 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1008 21:52:35.233772    5049 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1008 21:52:35.233847    5049 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1008 21:52:35.233967    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.104528    5049 addons.go:69] Setting gcp-auth=true in profile "addons-961288"
	I1008 21:52:35.241735    5049 mustload.go:65] Loading cluster: addons-961288
	I1008 21:52:35.241931    5049 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:52:35.242189    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.270141    5049 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1008 21:52:35.104535    5049 addons.go:69] Setting ingress=true in profile "addons-961288"
	I1008 21:52:35.272180    5049 addons.go:238] Setting addon ingress=true in "addons-961288"
	I1008 21:52:35.272225    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.272674    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.272898    5049 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1008 21:52:35.272942    5049 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1008 21:52:35.273000    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.104542    5049 addons.go:69] Setting ingress-dns=true in profile "addons-961288"
	I1008 21:52:35.296841    5049 addons.go:238] Setting addon ingress-dns=true in "addons-961288"
	I1008 21:52:35.296890    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.297351    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.105343    5049 out.go:179] * Verifying Kubernetes components...
	I1008 21:52:35.353039    5049 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1008 21:52:35.381791    5049 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1008 21:52:35.382381    5049 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 21:52:35.386569    5049 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-961288"
	I1008 21:52:35.386660    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.387128    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.392743    5049 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 21:52:35.392821    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1008 21:52:35.392914    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.416842    5049 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1008 21:52:35.420314    5049 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1008 21:52:35.423469    5049 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1008 21:52:35.423541    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1008 21:52:35.423593    5049 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 21:52:35.423648    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.433680    5049 out.go:179]   - Using image docker.io/registry:3.0.0
	I1008 21:52:35.436027    5049 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1008 21:52:35.436051    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1008 21:52:35.436144    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.449715    5049 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1008 21:52:35.423469    5049 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1008 21:52:35.452892    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1008 21:52:35.453105    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	W1008 21:52:35.456309    5049 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1008 21:52:35.456622    5049 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1008 21:52:35.456645    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1008 21:52:35.456714    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.452805    5049 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 21:52:35.501520    5049 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 21:52:35.501595    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 21:52:35.501706    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.515128    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.552628    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1008 21:52:35.555372    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1008 21:52:35.555407    5049 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1008 21:52:35.555488    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.572794    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.573696    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1008 21:52:35.576660    5049 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1008 21:52:35.578002    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.589687    5049 out.go:179]   - Using image docker.io/busybox:stable
	I1008 21:52:35.589843    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1008 21:52:35.590770    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.594329    5049 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 21:52:35.594352    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1008 21:52:35.594422    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.598841    5049 addons.go:238] Setting addon default-storageclass=true in "addons-961288"
	I1008 21:52:35.598879    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:35.599279    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:35.621324    5049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1008 21:52:35.627537    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1008 21:52:35.628140    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.632713    5049 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1008 21:52:35.635764    5049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1008 21:52:35.636902    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1008 21:52:35.636986    5049 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 21:52:35.640687    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1008 21:52:35.640776    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.644664    5049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1008 21:52:35.647434    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1008 21:52:35.647688    5049 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 21:52:35.647703    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1008 21:52:35.647766    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.657452    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1008 21:52:35.661764    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1008 21:52:35.667920    5049 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1008 21:52:35.673589    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1008 21:52:35.673615    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1008 21:52:35.673795    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.678932    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.711273    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.718074    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.738872    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.761856    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.777377    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.795557    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.800182    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.806541    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	W1008 21:52:35.810325    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:35.810365    5049 retry.go:31] will retry after 269.6754ms: ssh: handshake failed: EOF
	W1008 21:52:35.810492    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:35.810499    5049 retry.go:31] will retry after 295.399508ms: ssh: handshake failed: EOF
	W1008 21:52:35.812493    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:35.812517    5049 retry.go:31] will retry after 267.839688ms: ssh: handshake failed: EOF
	I1008 21:52:35.829281    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:35.829717    5049 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 21:52:35.829728    5049 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 21:52:35.829775    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:35.864548    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	W1008 21:52:36.083042    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:36.083096    5049 retry.go:31] will retry after 195.013468ms: ssh: handshake failed: EOF
	W1008 21:52:36.112387    5049 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1008 21:52:36.112417    5049 retry.go:31] will retry after 489.914771ms: ssh: handshake failed: EOF
	I1008 21:52:36.285169    5049 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1008 21:52:36.285260    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1008 21:52:36.311522    5049 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1008 21:52:36.311599    5049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1008 21:52:36.392346    5049 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1008 21:52:36.392425    5049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1008 21:52:36.395176    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1008 21:52:36.395258    5049 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1008 21:52:36.403039    5049 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1008 21:52:36.403076    5049 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1008 21:52:36.413026    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1008 21:52:36.472025    5049 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1008 21:52:36.472047    5049 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1008 21:52:36.519325    5049 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1008 21:52:36.519347    5049 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1008 21:52:36.547045    5049 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:36.547115    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1008 21:52:36.552153    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 21:52:36.555238    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1008 21:52:36.555298    5049 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1008 21:52:36.562439    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1008 21:52:36.583477    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1008 21:52:36.592780    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1008 21:52:36.608955    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 21:52:36.631667    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:36.635468    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1008 21:52:36.651327    5049 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1008 21:52:36.651349    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1008 21:52:36.663968    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1008 21:52:36.663991    5049 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1008 21:52:36.667221    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1008 21:52:36.667239    5049 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1008 21:52:36.668302    5049 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 21:52:36.668333    5049 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1008 21:52:36.803501    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1008 21:52:36.845351    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1008 21:52:36.847280    5049 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1008 21:52:36.847303    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1008 21:52:36.853347    5049 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 21:52:36.853371    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1008 21:52:36.874086    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1008 21:52:36.973316    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1008 21:52:36.973342    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1008 21:52:37.089120    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 21:52:37.098052    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1008 21:52:37.225978    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1008 21:52:37.226006    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1008 21:52:37.256545    5049 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.832917562s)
	I1008 21:52:37.256611    5049 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 21:52:37.256670    5049 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.874273368s)
	I1008 21:52:37.256686    5049 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1008 21:52:37.363692    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1008 21:52:37.363727    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1008 21:52:37.448472    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1008 21:52:37.607451    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1008 21:52:37.607477    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1008 21:52:37.760572    5049 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-961288" context rescaled to 1 replicas
	I1008 21:52:37.827786    5049 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1008 21:52:37.827813    5049 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1008 21:52:37.971330    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1008 21:52:37.971356    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1008 21:52:38.127831    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1008 21:52:38.127855    5049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1008 21:52:38.349946    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1008 21:52:38.349970    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1008 21:52:38.595718    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1008 21:52:38.595742    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1008 21:52:38.808780    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.395671572s)
	I1008 21:52:38.808838    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.256624493s)
	I1008 21:52:38.809016    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.246500372s)
	I1008 21:52:38.809066    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.225519994s)
	I1008 21:52:38.809340    5049 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 21:52:38.809357    5049 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1008 21:52:39.045901    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1008 21:52:39.998133    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.405269611s)
	I1008 21:52:39.998217    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.389238111s)
	I1008 21:52:40.207686    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.575984142s)
	W1008 21:52:40.207722    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:40.207740    5049 retry.go:31] will retry after 340.361653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:40.207776    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.57228674s)
	I1008 21:52:40.207829    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.404303179s)
	I1008 21:52:40.207844    5049 addons.go:479] Verifying addon registry=true in "addons-961288"
	I1008 21:52:40.212798    5049 out.go:179] * Verifying registry addon...
	I1008 21:52:40.216444    5049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1008 21:52:40.229377    5049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1008 21:52:40.229402    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:40.462339    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.616953085s)
	I1008 21:52:40.462627    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.588513088s)
	I1008 21:52:40.462667    5049 addons.go:479] Verifying addon metrics-server=true in "addons-961288"
	I1008 21:52:40.548479    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:40.720406    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:41.238613    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:41.284727    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.19556687s)
	W1008 21:52:41.284810    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1008 21:52:41.284844    5049 retry.go:31] will retry after 341.860996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1008 21:52:41.284922    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.186843731s)
	I1008 21:52:41.285132    5049 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.028502219s)
	I1008 21:52:41.285914    5049 node_ready.go:35] waiting up to 6m0s for node "addons-961288" to be "Ready" ...
	I1008 21:52:41.288175    5049 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-961288 service yakd-dashboard -n yakd-dashboard
	
	I1008 21:52:41.627222    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1008 21:52:41.740569    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:41.863936    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.41543177s)
	I1008 21:52:41.864121    5049 addons.go:479] Verifying addon ingress=true in "addons-961288"
	I1008 21:52:41.867296    5049 out.go:179] * Verifying ingress addon...
	I1008 21:52:41.870894    5049 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1008 21:52:41.875846    5049 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1008 21:52:41.875909    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:42.233602    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:42.295012    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.249062771s)
	I1008 21:52:42.295101    5049 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-961288"
	I1008 21:52:42.298227    5049 out.go:179] * Verifying csi-hostpath-driver addon...
	I1008 21:52:42.301925    5049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1008 21:52:42.312376    5049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1008 21:52:42.312448    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:42.375423    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:42.380581    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.832023069s)
	W1008 21:52:42.380687    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:42.380723    5049 retry.go:31] will retry after 466.340843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:42.719989    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:42.735692    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.108376541s)
	I1008 21:52:42.820836    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:42.848163    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:42.874611    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:43.202477    5049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1008 21:52:43.202658    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:43.226336    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:43.238276    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	W1008 21:52:43.290818    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:43.307503    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:43.376375    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:43.381659    5049 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1008 21:52:43.403319    5049 addons.go:238] Setting addon gcp-auth=true in "addons-961288"
	I1008 21:52:43.403370    5049 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:52:43.403893    5049 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:52:43.427040    5049 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1008 21:52:43.427111    5049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:52:43.453697    5049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:52:43.719615    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1008 21:52:43.737158    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:43.737237    5049 retry.go:31] will retry after 758.216086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:43.740787    5049 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1008 21:52:43.743768    5049 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1008 21:52:43.746645    5049 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1008 21:52:43.746677    5049 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1008 21:52:43.760068    5049 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1008 21:52:43.760088    5049 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1008 21:52:43.773797    5049 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 21:52:43.773823    5049 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1008 21:52:43.789757    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1008 21:52:43.806058    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:43.874011    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:44.223412    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:44.276708    5049 addons.go:479] Verifying addon gcp-auth=true in "addons-961288"
	I1008 21:52:44.280358    5049 out.go:179] * Verifying gcp-auth addon...
	I1008 21:52:44.283966    5049 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1008 21:52:44.293910    5049 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1008 21:52:44.293986    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:44.393579    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:44.393833    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:44.496177    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:44.719641    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:44.788919    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:44.805791    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:44.874446    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:45.220939    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:45.290168    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:45.294466    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:45.305957    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:45.375281    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1008 21:52:45.407783    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:45.407824    5049 retry.go:31] will retry after 795.029046ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:45.719748    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:45.789158    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:45.805179    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:45.874074    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:46.203501    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:46.220217    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:46.288600    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:46.306028    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:46.374755    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:46.719367    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:46.789465    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:46.806044    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:46.874519    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1008 21:52:46.999490    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:46.999564    5049 retry.go:31] will retry after 1.486496131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:47.219393    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:47.288415    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:47.306222    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:47.373882    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:47.720256    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:47.788224    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:47.789078    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:47.805173    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:47.873728    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:48.219722    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:48.287507    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:48.305209    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:48.374636    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:48.486811    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:48.720187    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:48.787222    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:48.805138    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:48.874131    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:49.220575    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1008 21:52:49.286921    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:49.286951    5049 retry.go:31] will retry after 2.262041796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:49.288365    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:49.305480    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:49.374367    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:49.719319    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:49.787333    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:49.805956    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:49.873829    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:50.220325    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:50.287378    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:50.289004    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:50.304574    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:50.375027    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:50.719800    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:50.787549    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:50.805419    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:50.874415    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:51.219090    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:51.286543    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:51.304936    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:51.374016    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:51.549417    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:51.719554    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:51.787843    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:51.805371    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:51.874304    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:52.220641    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:52.287454    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:52.289092    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:52.304578    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1008 21:52:52.377219    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:52.377299    5049 retry.go:31] will retry after 3.926801977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:52.390758    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:52.719643    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:52.788031    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:52.805362    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:52.874344    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:53.219270    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:53.287367    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:53.304831    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:53.374515    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:53.719261    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:53.788246    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:53.805555    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:53.874439    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:54.219888    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:54.288646    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:54.292089    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:54.305302    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:54.374457    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:54.719003    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:54.786983    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:54.805117    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:54.874102    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:55.220183    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:55.286767    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:55.305355    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:55.374500    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:55.719864    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:55.787860    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:55.805122    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:55.873970    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:56.220387    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:56.287236    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:56.304604    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:52:56.305009    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:56.374229    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:56.719524    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:56.787424    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:56.789748    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:56.806102    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:56.874805    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1008 21:52:57.108005    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:57.108074    5049 retry.go:31] will retry after 5.852321959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:52:57.219717    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:57.288517    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:57.305506    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:57.374194    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:57.720536    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:57.787441    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:57.805180    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:57.874376    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:58.220309    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:58.289077    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:58.305403    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:58.374855    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:58.719928    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:58.786817    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:58.805763    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:58.875013    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:59.220228    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:59.287085    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:52:59.288982    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:52:59.305878    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:59.373905    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:52:59.719741    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:52:59.787992    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:52:59.805253    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:52:59.874007    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:00.221543    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:00.290582    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:00.322977    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:00.374690    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:00.720033    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:00.789538    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:00.805824    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:00.874832    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:01.219585    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:01.287932    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:01.289471    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:01.305516    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:01.374289    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:01.720121    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:01.786962    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:01.805493    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:01.874266    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:02.220156    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:02.287498    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:02.305104    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:02.374111    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:02.720327    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:02.787040    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:02.804753    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:02.874968    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:02.961411    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:03.219452    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:03.287706    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:03.289981    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:03.308698    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:03.374821    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:03.720305    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1008 21:53:03.760910    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:03.760941    5049 retry.go:31] will retry after 7.84841166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:03.786711    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:03.805172    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:03.874068    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:04.220090    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:04.289025    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:04.314556    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:04.374512    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:04.719623    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:04.787260    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:04.805179    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:04.875598    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:05.220075    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:05.286972    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:05.305392    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:05.375178    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:05.720458    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:05.787419    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:05.789246    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:05.804953    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:05.874117    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:06.220277    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:06.286968    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:06.305033    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:06.374215    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:06.720084    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:06.786793    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:06.804930    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:06.873835    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:07.220205    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:07.287272    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:07.305285    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:07.374051    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:07.720293    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:07.788123    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:07.789435    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:07.805344    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:07.874491    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:08.219585    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:08.287565    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:08.305385    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:08.375126    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:08.719232    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:08.787362    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:08.804770    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:08.874779    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:09.220373    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:09.287589    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:09.305569    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:09.374478    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:09.719438    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:09.787238    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:09.804920    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:09.874103    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:10.219261    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:10.288192    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:10.288998    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:10.305559    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:10.374666    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:10.719666    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:10.787814    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:10.805564    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:10.874535    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:11.220194    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:11.287074    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:11.304987    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:11.373817    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:11.610211    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:11.719437    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:11.787476    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:11.804986    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:11.874338    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:12.220068    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:12.288701    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:12.292721    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:12.306394    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:12.375295    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1008 21:53:12.414754    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:12.414828    5049 retry.go:31] will retry after 5.188779325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:12.720015    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:12.788814    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:12.805625    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:12.874424    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:13.219410    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:13.287191    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:13.305059    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:13.374068    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:13.720225    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:13.788021    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:13.804889    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:13.874752    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:14.220149    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:14.288932    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:14.305399    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:14.374434    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:14.719416    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:14.787396    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1008 21:53:14.789651    5049 node_ready.go:57] node "addons-961288" has "Ready":"False" status (will retry)
	I1008 21:53:14.805268    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:14.874038    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:15.220176    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:15.287206    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:15.305314    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:15.374205    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:15.719249    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:15.787300    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:15.804671    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:15.874542    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:16.219157    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:16.286773    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:16.304819    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:16.374907    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:16.721963    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:16.787908    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:16.805776    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:16.874388    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:17.223589    5049 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1008 21:53:17.223665    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:17.295354    5049 node_ready.go:49] node "addons-961288" is "Ready"
	I1008 21:53:17.295424    5049 node_ready.go:38] duration metric: took 36.009455597s for node "addons-961288" to be "Ready" ...
	I1008 21:53:17.295451    5049 api_server.go:52] waiting for apiserver process to appear ...
	I1008 21:53:17.295538    5049 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 21:53:17.301535    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:17.328120    5049 api_server.go:72] duration metric: took 42.252435598s to wait for apiserver process to appear ...
	I1008 21:53:17.328147    5049 api_server.go:88] waiting for apiserver healthz status ...
	I1008 21:53:17.328165    5049 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1008 21:53:17.348062    5049 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1008 21:53:17.349361    5049 api_server.go:141] control plane version: v1.34.1
	I1008 21:53:17.349387    5049 api_server.go:131] duration metric: took 21.233702ms to wait for apiserver health ...
	I1008 21:53:17.349397    5049 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 21:53:17.355660    5049 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1008 21:53:17.355694    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:17.436890    5049 system_pods.go:59] 19 kube-system pods found
	I1008 21:53:17.436945    5049 system_pods.go:61] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:17.436956    5049 system_pods.go:61] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:17.436963    5049 system_pods.go:61] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending
	I1008 21:53:17.436972    5049 system_pods.go:61] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending
	I1008 21:53:17.436976    5049 system_pods.go:61] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:17.436988    5049 system_pods.go:61] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:17.436997    5049 system_pods.go:61] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:17.437016    5049 system_pods.go:61] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:17.437033    5049 system_pods.go:61] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:17.437055    5049 system_pods.go:61] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:17.437061    5049 system_pods.go:61] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:17.437070    5049 system_pods.go:61] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:17.437080    5049 system_pods.go:61] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending
	I1008 21:53:17.437093    5049 system_pods.go:61] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending
	I1008 21:53:17.437105    5049 system_pods.go:61] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:17.437110    5049 system_pods.go:61] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending
	I1008 21:53:17.437119    5049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:17.437129    5049 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending
	I1008 21:53:17.437134    5049 system_pods.go:61] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending
	I1008 21:53:17.437140    5049 system_pods.go:74] duration metric: took 87.737742ms to wait for pod list to return data ...
	I1008 21:53:17.437153    5049 default_sa.go:34] waiting for default service account to be created ...
	I1008 21:53:17.442182    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:17.499354    5049 default_sa.go:45] found service account: "default"
	I1008 21:53:17.499383    5049 default_sa.go:55] duration metric: took 62.223651ms for default service account to be created ...
	I1008 21:53:17.499394    5049 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 21:53:17.513684    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:17.513731    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:17.513748    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:17.513754    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending
	I1008 21:53:17.513763    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending
	I1008 21:53:17.513767    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:17.513772    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:17.513783    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:17.513787    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:17.513795    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:17.513813    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:17.513819    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:17.513825    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:17.513835    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending
	I1008 21:53:17.513839    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending
	I1008 21:53:17.513845    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:17.513850    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending
	I1008 21:53:17.513860    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:17.513868    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending
	I1008 21:53:17.513872    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending
	I1008 21:53:17.513897    5049 retry.go:31] will retry after 225.672799ms: missing components: kube-dns
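The polls above back off with growing delays (about 226ms, then 316ms, then 435ms) until kube-dns turns up Running. A minimal Go sketch of that kind of growing, jittered retry loop; waitForComponent and its callback are illustrative placeholders under stated assumptions, not minikube's retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// waitForComponent polls check() with a growing, jittered delay until it
	// succeeds or the deadline passes. Illustrative only; not minikube's retry.go.
	func waitForComponent(name string, check func() bool, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if check() {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
			fmt.Printf("will retry after %v: missing components: %s\n", wait, name)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay each attempt
		}
		return fmt.Errorf("timed out waiting for %s", name)
	}

	func main() {
		attempts := 0
		err := waitForComponent("kube-dns", func() bool {
			attempts++
			return attempts >= 3 // pretend the pod turns Running on the third poll
		}, 30*time.Second)
		fmt.Println("result:", err)
	}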
	I1008 21:53:17.603976    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:17.767008    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:17.767045    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:17.767065    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:17.767075    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:17.767082    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending
	I1008 21:53:17.767091    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:17.767096    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:17.767100    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:17.767113    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:17.767120    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:17.767139    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:17.767145    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:17.767157    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:17.767164    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:17.767171    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:17.767180    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:17.767186    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:17.767198    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:17.767210    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:17.767222    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 21:53:17.767237    5049 retry.go:31] will retry after 315.53954ms: missing components: kube-dns
	I1008 21:53:17.767699    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:17.835784    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:17.835968    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:17.880988    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
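Each kapi.go line here is a single pass of a poll over pods matching one label selector (registry, gcp-auth, csi-hostpath-driver, ingress-nginx), logged while those pods are still Pending. As a rough illustration of that selector-based wait, a client-go sketch; this is not minikube's kapi.go, and it only reuses the kubeconfig path and registry selector that appear in the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabeledPods polls until every pod matching selector is Running.
	func waitForLabeledPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}

	func main() {
		// Kubeconfig path taken from the log; adjust for the environment under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabeledPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}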
	I1008 21:53:18.089575    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:18.089614    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:18.089624    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:18.089668    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:18.089678    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 21:53:18.089686    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:18.089691    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:18.089699    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:18.089703    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:18.089709    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:18.089731    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:18.089736    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:18.089742    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:18.089755    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:18.089764    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:18.089788    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:18.089800    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:18.089807    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.089819    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.089825    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 21:53:18.089846    5049 retry.go:31] will retry after 435.438173ms: missing components: kube-dns
	I1008 21:53:18.248257    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:18.354990    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:18.355061    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:18.455065    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:18.531443    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:18.531482    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:18.531492    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:18.531500    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:18.531528    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 21:53:18.531539    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:18.531546    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:18.531550    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:18.531555    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:18.531568    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:18.531573    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:18.531587    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:18.531598    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:18.531605    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:18.531611    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:18.531625    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:18.531634    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:18.531643    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.531650    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.531665    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 21:53:18.531686    5049 retry.go:31] will retry after 410.644437ms: missing components: kube-dns
	I1008 21:53:18.721696    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:18.822189    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:18.822379    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:18.923861    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:18.949777    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:18.949874    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 21:53:18.949900    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:18.949944    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:18.949975    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 21:53:18.950005    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:18.950027    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:18.950056    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:18.950080    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:18.950110    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:18.950140    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:18.950178    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:18.950199    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:18.950226    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:18.950273    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:18.950304    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:18.950332    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:18.950359    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.950392    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:18.950428    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 21:53:18.950537    5049 retry.go:31] will retry after 475.838949ms: missing components: kube-dns
	I1008 21:53:18.984627    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.380610409s)
	W1008 21:53:18.984715    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:18.984756    5049 retry.go:31] will retry after 14.372601313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
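The client-side validation failure above apparently means at least one document in ig-crd.yaml is missing the apiVersion and kind fields kubectl requires (the error itself offers --validate=false as an escape hatch). A small, illustrative Go check for that condition, assuming gopkg.in/yaml.v3 and reusing the manifest path from the failing command:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// checkManifest reports YAML documents missing apiVersion or kind, the two
	// fields kubectl's client-side validation complained about above.
	func checkManifest(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				return nil
			} else if err != nil {
				return fmt.Errorf("document %d: %w", i, err)
			}
			if _, ok := doc["apiVersion"]; !ok {
				fmt.Printf("document %d: apiVersion not set\n", i)
			}
			if _, ok := doc["kind"]; !ok {
				fmt.Printf("document %d: kind not set\n", i)
			}
		}
	}

	func main() {
		// Path taken from the failing kubectl apply above.
		if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
			fmt.Println("error:", err)
		}
	}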
	I1008 21:53:19.220359    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:19.295253    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:19.322740    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:19.422124    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:19.434130    5049 system_pods.go:86] 19 kube-system pods found
	I1008 21:53:19.436339    5049 system_pods.go:89] "coredns-66bc5c9577-44hjj" [b45d78d5-cdda-4ac0-86d0-2258da8451cd] Running
	I1008 21:53:19.436376    5049 system_pods.go:89] "csi-hostpath-attacher-0" [752485b2-dc65-4744-8d9f-2848cd7bdeae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1008 21:53:19.436386    5049 system_pods.go:89] "csi-hostpath-resizer-0" [fdcceb8d-629d-4735-af3c-3155701b0572] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1008 21:53:19.436396    5049 system_pods.go:89] "csi-hostpathplugin-ncxdq" [436acc30-450f-4780-a607-51bd0ab90b58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1008 21:53:19.436400    5049 system_pods.go:89] "etcd-addons-961288" [2cc108d0-9181-47e0-a069-718c8a84ead9] Running
	I1008 21:53:19.436406    5049 system_pods.go:89] "kindnet-6rwkn" [d2031588-b25a-449d-8dee-4d90339a3890] Running
	I1008 21:53:19.436410    5049 system_pods.go:89] "kube-apiserver-addons-961288" [72c300f5-3893-4014-a67b-5d05083173ee] Running
	I1008 21:53:19.436415    5049 system_pods.go:89] "kube-controller-manager-addons-961288" [07db77b1-56c5-42d8-a8e3-e9b84c7366a9] Running
	I1008 21:53:19.436422    5049 system_pods.go:89] "kube-ingress-dns-minikube" [78d4e408-820d-4b5e-981d-ee448484afc1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1008 21:53:19.436426    5049 system_pods.go:89] "kube-proxy-xq75f" [f7298956-67b0-42a0-bd18-f1bdf934f35b] Running
	I1008 21:53:19.436431    5049 system_pods.go:89] "kube-scheduler-addons-961288" [6a774db3-e79b-486f-a70c-5c6891dfacfb] Running
	I1008 21:53:19.436438    5049 system_pods.go:89] "metrics-server-85b7d694d7-kwc69" [56d8dd7e-2eef-4585-904d-f0fa31b79949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1008 21:53:19.436446    5049 system_pods.go:89] "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1008 21:53:19.436456    5049 system_pods.go:89] "registry-66898fdd98-sbgsn" [4a98c646-e446-4dd0-aaad-a11f3d44e250] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1008 21:53:19.436462    5049 system_pods.go:89] "registry-creds-764b6fb674-jqkzb" [8c01014d-f946-46ff-a3a7-33fb2c409449] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1008 21:53:19.436470    5049 system_pods.go:89] "registry-proxy-f8ff7" [2ffd1993-3424-4668-9aea-141c903307ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1008 21:53:19.436478    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5cc8z" [2603d7cb-0dae-4e79-9c8a-a9bac0022859] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:19.436485    5049 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vw7qn" [3a54f658-aa4c-4378-af4b-b217a2f4ad44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1008 21:53:19.436491    5049 system_pods.go:89] "storage-provisioner" [0eeed6d3-5048-4aa6-95af-c29fa788d5c6] Running
	I1008 21:53:19.436500    5049 system_pods.go:126] duration metric: took 1.937099519s to wait for k8s-apps to be running ...
	I1008 21:53:19.436507    5049 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 21:53:19.436565    5049 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 21:53:19.452123    5049 system_svc.go:56] duration metric: took 15.607346ms WaitForService to wait for kubelet
	I1008 21:53:19.452203    5049 kubeadm.go:586] duration metric: took 44.376522674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 21:53:19.452238    5049 node_conditions.go:102] verifying NodePressure condition ...
	I1008 21:53:19.455605    5049 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 21:53:19.455639    5049 node_conditions.go:123] node cpu capacity is 2
	I1008 21:53:19.455653    5049 node_conditions.go:105] duration metric: took 3.396177ms to run NodePressure ...
	I1008 21:53:19.455667    5049 start.go:241] waiting for startup goroutines ...
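The NodePressure step reads each node's capacity (here 203034800Ki of ephemeral storage and 2 CPUs) and its pressure conditions. A hedged client-go sketch that prints the same fields; it is an illustration rather than minikube's node_conditions.go, and the kubeconfig path is taken from the log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// printNodePressure lists each node's CPU/ephemeral-storage capacity and its
	// memory/disk pressure conditions, the data a NodePressure check inspects.
	func printNodePressure(cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
		return nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := printNodePressure(cs); err != nil {
			fmt.Println(err)
		}
	}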
	I1008 21:53:19.719830    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:19.788179    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:19.805468    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:19.874886    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:20.220168    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:20.287114    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:20.305601    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:20.374952    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:20.720566    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:20.822315    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:20.822787    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:20.875314    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:21.220330    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:21.287613    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:21.305322    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:21.374688    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:21.720632    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:21.787963    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:21.805912    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:21.874379    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:22.219752    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:22.288081    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:22.305693    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:22.375036    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:22.720724    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:22.787639    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:22.805667    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:22.874493    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:23.220521    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:23.287513    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:23.306310    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:23.406745    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:23.720310    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:23.787309    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:23.806037    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:23.874743    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:24.220470    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:24.287343    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:24.308794    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:24.375157    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:24.719831    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:24.787871    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:24.806189    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:24.874843    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:25.220379    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:25.288080    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:25.305542    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:25.374800    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:25.723037    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:25.788101    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:25.805445    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:25.874989    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:26.221019    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:26.289342    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:26.306316    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:26.379551    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:26.719698    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:26.787252    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:26.805168    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:26.886566    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:27.220075    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:27.287021    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:27.306375    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:27.378327    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:27.721722    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:27.787946    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:27.805692    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:27.875590    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:28.220001    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:28.287571    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:28.306499    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:28.380233    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:28.720909    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:28.791710    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:28.806840    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:28.876414    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:29.219507    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:29.287605    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:29.306299    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:29.374383    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:29.719724    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:29.787756    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:29.805745    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:29.874661    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:30.219788    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:30.288146    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:30.305211    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:30.374331    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:30.720313    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:30.787030    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:30.805837    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:30.874348    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:31.220876    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:31.288053    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:31.305798    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:31.374619    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:31.720338    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:31.787260    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:31.805730    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:31.874556    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:32.219882    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:32.287575    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:32.305561    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:32.375508    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:32.720055    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:32.787479    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:32.805503    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:32.874796    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:33.220416    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:33.287616    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:33.306583    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:33.357846    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:33.374660    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:33.727197    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:33.787931    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:33.805710    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:33.875465    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:34.220401    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:34.320944    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:34.321760    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:34.382016    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:34.603130    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.245251462s)
	W1008 21:53:34.603179    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:34.603199    5049 retry.go:31] will retry after 14.7472332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:34.720642    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:34.787502    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:34.805802    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:34.874879    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:35.220069    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:35.288124    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:35.305313    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:35.373779    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:35.720036    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:35.787838    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:35.805664    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:35.874331    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:36.220431    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:36.287061    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:36.305227    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:36.374850    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:36.721738    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:36.788289    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:36.806208    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:36.874715    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:37.219803    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:37.287666    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:37.305714    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:37.380658    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:37.720722    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:37.787253    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:37.806173    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:37.874979    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:38.220952    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:38.287754    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:38.306583    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:38.374841    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:38.719963    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:38.787806    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:38.806106    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:38.875058    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:39.220567    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:39.287431    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:39.305659    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:39.374845    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:39.720588    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:39.787486    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:39.805969    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:39.874266    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:40.219925    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:40.287991    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:40.305176    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:40.374374    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:40.720022    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:40.786972    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:40.805415    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:40.874677    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:41.219534    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:41.287479    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:41.305731    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:41.375353    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:41.719819    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:41.787749    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:41.806435    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:41.874996    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:42.226068    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:42.288892    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:42.307542    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:42.375919    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:42.720992    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:42.788382    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:42.805819    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:42.875408    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:43.219639    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:43.293390    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:43.306810    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:43.379240    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:43.720908    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:43.788266    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:43.805545    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:43.875887    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:44.222864    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:44.288504    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:44.306895    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:44.375659    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:44.720662    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:44.822601    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:44.823648    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:44.922062    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:45.221301    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:45.288524    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:45.310276    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:45.375487    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:45.720326    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:45.787641    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:45.805710    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:45.874874    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:46.220585    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:46.287602    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:46.305781    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:46.374827    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:46.719831    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:46.788729    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:46.805848    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:46.873877    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:47.220226    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:47.287306    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:47.305607    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:47.375045    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:47.722134    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:47.787928    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:47.806000    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:47.874123    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:48.220253    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:48.287278    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:48.305957    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:48.374861    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:48.724890    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:48.795243    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:48.807550    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:48.875309    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:49.220189    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:49.286802    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:49.305718    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:49.350693    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:53:49.373736    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:49.720086    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:49.787065    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:49.805167    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:49.875059    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:50.220156    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:50.287475    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:50.306491    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:50.375334    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:50.621158    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.270430064s)
	W1008 21:53:50.621248    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:50.621282    5049 retry.go:31] will retry after 17.302032834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1008 21:53:50.721155    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:50.787724    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:50.806662    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:50.875892    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:51.220758    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:51.288324    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:51.308573    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:51.373861    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:51.720719    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:51.787772    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:51.806411    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:51.875370    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:52.219145    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:52.287096    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:52.306783    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:52.374940    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:52.720538    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:52.787777    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:52.806554    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:52.874638    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:53.219386    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:53.287258    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:53.305401    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:53.374019    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:53.719645    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:53.787771    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:53.806313    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:53.874327    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:54.220384    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:54.287689    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:54.311587    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:54.374224    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:54.719913    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:54.787202    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:54.805078    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:54.873904    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:55.219955    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:55.287022    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:55.305894    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:55.374589    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:55.719706    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:55.788249    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:55.805327    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:55.876795    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:56.220043    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:56.287016    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:56.305848    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:56.374747    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:56.720182    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:56.787949    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:56.806614    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:56.874746    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:57.219736    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:57.287555    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:57.305573    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:57.374231    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:57.720386    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:57.787601    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:57.806174    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:57.874027    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:58.220386    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:58.287848    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:58.306000    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:58.374856    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:58.719326    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:58.787432    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:58.805977    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:58.875079    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:59.221208    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:59.287190    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:59.305753    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:59.374825    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:53:59.720121    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:53:59.787319    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:53:59.805170    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:53:59.873888    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:00.248445    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:00.355667    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:00.356813    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:00.376533    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:00.720091    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:00.821012    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:00.821463    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:00.874730    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:01.222103    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:01.323193    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:01.323357    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:01.374488    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:01.719745    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:01.787602    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:01.805594    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:01.874291    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:02.221141    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:02.323524    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:02.323671    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:02.375184    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:02.719938    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:02.820749    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:02.820568    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:02.875535    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:03.220120    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:03.287175    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:03.305678    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:03.374833    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:03.719983    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:03.787154    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:03.805289    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:03.874119    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:04.219624    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:04.288108    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:04.305169    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:04.374742    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:04.720262    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:04.822132    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:04.822363    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:04.874681    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:05.220066    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:05.287145    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:05.305955    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:05.374998    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:05.719522    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:05.787616    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:05.805872    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:05.874187    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:06.219460    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:06.287837    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:06.305444    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:06.374903    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:06.720517    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:06.789010    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:06.805856    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:06.873903    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:07.221534    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1008 21:54:07.287908    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:07.304707    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:07.374816    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:07.720912    5049 kapi.go:107] duration metric: took 1m27.504464829s to wait for kubernetes.io/minikube-addons=registry ...
	I1008 21:54:07.787090    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:07.804977    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:07.873923    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:07.923831    5049 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1008 21:54:08.287107    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:08.305869    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:08.377568    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:08.787870    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:08.890669    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:08.891199    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:09.287711    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:09.312005    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:09.350486    5049 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.426615075s)
	W1008 21:54:09.350560    5049 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1008 21:54:09.350657    5049 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1008 21:54:09.375222    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:09.787404    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:09.805974    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:09.873870    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:10.287137    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:10.305857    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:10.374149    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:10.787141    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:10.805015    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:10.874130    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:11.287249    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:11.305145    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:11.374082    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:11.787120    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:11.805907    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:11.873678    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:12.288532    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:12.306466    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:12.375267    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:12.787689    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:12.806188    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:12.875433    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:13.287302    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:13.305160    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:13.374825    5049 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1008 21:54:13.787409    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:13.806704    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:13.875219    5049 kapi.go:107] duration metric: took 1m32.004327123s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1008 21:54:14.288207    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:14.306111    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:14.787036    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:14.805178    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:15.287722    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:15.306527    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:15.787866    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:15.807056    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:16.287804    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:16.305557    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:16.786968    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:16.805300    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:17.288219    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:17.305499    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:17.802383    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:17.808753    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:18.287274    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:18.305318    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:18.788911    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:18.805953    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:19.287842    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:19.391549    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:19.788261    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:19.805943    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:20.287645    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:20.305504    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:20.794183    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:20.805280    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:21.289146    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:21.311351    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:21.787909    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:21.805456    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:22.288128    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:22.305521    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:22.787327    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:22.805381    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:23.288156    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:23.305115    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:23.788075    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:23.805594    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:24.287567    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:24.306339    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:24.792439    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:24.806205    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:25.288004    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1008 21:54:25.305186    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:25.789717    5049 kapi.go:107] duration metric: took 1m41.5057499s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1008 21:54:25.793157    5049 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-961288 cluster.
	I1008 21:54:25.796078    5049 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1008 21:54:25.798998    5049 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1008 21:54:25.812639    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:26.306246    5049 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1008 21:54:26.813055    5049 kapi.go:107] duration metric: took 1m44.511130471s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1008 21:54:26.816171    5049 out.go:179] * Enabled addons: cloud-spanner, nvidia-device-plugin, registry-creds, default-storageclass, ingress-dns, storage-provisioner, amd-gpu-device-plugin, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1008 21:54:26.820126    5049 addons.go:514] duration metric: took 1m51.743976161s for enable addons: enabled=[cloud-spanner nvidia-device-plugin registry-creds default-storageclass ingress-dns storage-provisioner amd-gpu-device-plugin metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1008 21:54:26.820177    5049 start.go:246] waiting for cluster config update ...
	I1008 21:54:26.820199    5049 start.go:255] writing updated cluster config ...
	I1008 21:54:26.820517    5049 ssh_runner.go:195] Run: rm -f paused
	I1008 21:54:26.825970    5049 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 21:54:26.830723    5049 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-44hjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.840820    5049 pod_ready.go:94] pod "coredns-66bc5c9577-44hjj" is "Ready"
	I1008 21:54:26.840849    5049 pod_ready.go:86] duration metric: took 10.092612ms for pod "coredns-66bc5c9577-44hjj" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.843863    5049 pod_ready.go:83] waiting for pod "etcd-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.848319    5049 pod_ready.go:94] pod "etcd-addons-961288" is "Ready"
	I1008 21:54:26.848350    5049 pod_ready.go:86] duration metric: took 4.460194ms for pod "etcd-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.850789    5049 pod_ready.go:83] waiting for pod "kube-apiserver-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.857871    5049 pod_ready.go:94] pod "kube-apiserver-addons-961288" is "Ready"
	I1008 21:54:26.857896    5049 pod_ready.go:86] duration metric: took 7.07442ms for pod "kube-apiserver-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:26.860444    5049 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:27.230174    5049 pod_ready.go:94] pod "kube-controller-manager-addons-961288" is "Ready"
	I1008 21:54:27.230203    5049 pod_ready.go:86] duration metric: took 369.733345ms for pod "kube-controller-manager-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:27.430597    5049 pod_ready.go:83] waiting for pod "kube-proxy-xq75f" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:27.830628    5049 pod_ready.go:94] pod "kube-proxy-xq75f" is "Ready"
	I1008 21:54:27.830702    5049 pod_ready.go:86] duration metric: took 400.040344ms for pod "kube-proxy-xq75f" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:28.030548    5049 pod_ready.go:83] waiting for pod "kube-scheduler-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:28.430413    5049 pod_ready.go:94] pod "kube-scheduler-addons-961288" is "Ready"
	I1008 21:54:28.430445    5049 pod_ready.go:86] duration metric: took 399.864471ms for pod "kube-scheduler-addons-961288" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 21:54:28.430460    5049 pod_ready.go:40] duration metric: took 1.60445921s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 21:54:28.835165    5049 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 21:54:28.838442    5049 out.go:179] * Done! kubectl is now configured to use "addons-961288" cluster and "default" namespace by default
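
The inspektor-gadget failure recorded above is a client-side validation error: kubectl reports that a document in /etc/kubernetes/addons/ig-crd.yaml has neither apiVersion nor kind set, so every retry of the apply fails with the same message while the other gadget resources stay unchanged. As a rough illustration only (the file path and manifest below are hypothetical, not taken from this run), any manifest document missing those two keys trips the same check:

	# Hypothetical sketch: a manifest document without apiVersion/kind fails
	# kubectl's client-side validation with the message seen in the log above.
	cat <<'EOF' > /tmp/broken.yaml
	metadata:
	  name: example        # apiVersion and kind intentionally missing
	EOF
	kubectl apply -f /tmp/broken.yaml
	# error validating data: [apiVersion not set, kind not set]
	kubectl apply -f /tmp/broken.yaml --validate=false
	# disables only the client-side check; the API server can still reject the object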
	
	
	==> CRI-O <==
	Oct 08 21:54:57 addons-961288 crio[829]: time="2025-10-08T21:54:57.782435176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:54:57 addons-961288 crio[829]: time="2025-10-08T21:54:57.784763658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:54:57 addons-961288 crio[829]: time="2025-10-08T21:54:57.801044168Z" level=info msg="Created container 10af0a53ea60b8ecab47ddd6b526c6ebd443b6a5a37b52d8a80b88aa94dabe58: default/test-local-path/busybox" id=6af525a8-c812-4433-a6c7-c4a8368e46ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 21:54:57 addons-961288 crio[829]: time="2025-10-08T21:54:57.802220799Z" level=info msg="Starting container: 10af0a53ea60b8ecab47ddd6b526c6ebd443b6a5a37b52d8a80b88aa94dabe58" id=c88b9470-9b8f-4a1d-90d5-83c114ca1262 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 21:54:57 addons-961288 crio[829]: time="2025-10-08T21:54:57.805296559Z" level=info msg="Started container" PID=5364 containerID=10af0a53ea60b8ecab47ddd6b526c6ebd443b6a5a37b52d8a80b88aa94dabe58 description=default/test-local-path/busybox id=c88b9470-9b8f-4a1d-90d5-83c114ca1262 name=/runtime.v1.RuntimeService/StartContainer sandboxID=43353b2a0e890180f269014890b7cd5bbc5b22637088c15e3357a32189041c57
	Oct 08 21:54:58 addons-961288 crio[829]: time="2025-10-08T21:54:58.973980559Z" level=info msg="Stopping pod sandbox: 43353b2a0e890180f269014890b7cd5bbc5b22637088c15e3357a32189041c57" id=0494aac5-380f-47d5-aff1-c42e5a33a7db name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 08 21:54:58 addons-961288 crio[829]: time="2025-10-08T21:54:58.974244782Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:43353b2a0e890180f269014890b7cd5bbc5b22637088c15e3357a32189041c57 UID:b549c674-ee70-4711-a6ff-1deafa8d86a5 NetNS:/var/run/netns/cbd4ebb0-5315-424d-9e6e-733bfe06fa3e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001c8e3b8}] Aliases:map[]}"
	Oct 08 21:54:58 addons-961288 crio[829]: time="2025-10-08T21:54:58.974394756Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Oct 08 21:54:59 addons-961288 crio[829]: time="2025-10-08T21:54:59.008440439Z" level=info msg="Stopped pod sandbox: 43353b2a0e890180f269014890b7cd5bbc5b22637088c15e3357a32189041c57" id=0494aac5-380f-47d5-aff1-c42e5a33a7db name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.556436173Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963/POD" id=711c70c8-6fe2-4bd0-9a74-40362192c924 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.556512252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.595120786Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963 Namespace:local-path-storage ID:dd4d8d68df92ebb75f5d8f6402c5f5fd606c9f3b950240d051c5e30a539da3b4 UID:3df08afa-b177-4167-930b-b39fb437b0e2 NetNS:/var/run/netns/ce29fb62-468e-4324-bf64-3b4b76d98688 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b44dc8}] Aliases:map[]}"
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.595162813Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963 to CNI network \"kindnet\" (type=ptp)"
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.621406995Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963 Namespace:local-path-storage ID:dd4d8d68df92ebb75f5d8f6402c5f5fd606c9f3b950240d051c5e30a539da3b4 UID:3df08afa-b177-4167-930b-b39fb437b0e2 NetNS:/var/run/netns/ce29fb62-468e-4324-bf64-3b4b76d98688 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b44dc8}] Aliases:map[]}"
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.621902326Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963 for CNI network kindnet (type=ptp)"
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.633470414Z" level=info msg="Ran pod sandbox dd4d8d68df92ebb75f5d8f6402c5f5fd606c9f3b950240d051c5e30a539da3b4 with infra container: local-path-storage/helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963/POD" id=711c70c8-6fe2-4bd0-9a74-40362192c924 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.639219553Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=93778691-b24a-468a-82de-3bbb8a5ecdab name=/runtime.v1.ImageService/ImageStatus
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.640640206Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=362aeb2e-4a1e-4dcb-acd8-4c508ba844f8 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.648016095Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963/helper-pod" id=6b2add1d-4fe1-48ed-9931-6a1a772bcd2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.648442453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.668138878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.668825988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.693198365Z" level=info msg="Created container ee7fee4f58fb327292019a6aaeab0d2bb32a7ea28977a36a7e6578e85a29d2bb: local-path-storage/helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963/helper-pod" id=6b2add1d-4fe1-48ed-9931-6a1a772bcd2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.699553891Z" level=info msg="Starting container: ee7fee4f58fb327292019a6aaeab0d2bb32a7ea28977a36a7e6578e85a29d2bb" id=c896e606-244f-4858-bb74-49ec64008786 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 21:55:01 addons-961288 crio[829]: time="2025-10-08T21:55:01.704751109Z" level=info msg="Started container" PID=5515 containerID=ee7fee4f58fb327292019a6aaeab0d2bb32a7ea28977a36a7e6578e85a29d2bb description=local-path-storage/helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963/helper-pod id=c896e606-244f-4858-bb74-49ec64008786 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd4d8d68df92ebb75f5d8f6402c5f5fd606c9f3b950240d051c5e30a539da3b4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	ee7fee4f58fb3       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   dd4d8d68df92e       helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963   local-path-storage
	10af0a53ea60b       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            4 seconds ago        Exited              busybox                                  0                   43353b2a0e890       test-local-path                                              default
	22860663fcde4       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            9 seconds ago        Exited              helper-pod                               0                   4d46227fde43a       helper-pod-create-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963   local-path-storage
	11383bed5f922       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   e053760515e17       busybox                                                      default
	1b176619cba2b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          36 seconds ago       Running             csi-snapshotter                          0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                                     kube-system
	844fd610070b1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 37 seconds ago       Running             gcp-auth                                 0                   d7923dab72b20       gcp-auth-78565c9fb4-7bx27                                    gcp-auth
	6914889d561d2       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          40 seconds ago       Running             csi-provisioner                          0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                                     kube-system
	04cded645e0f5       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            41 seconds ago       Running             liveness-probe                           0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                                     kube-system
	53f8bbdff2a61       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           42 seconds ago       Running             hostpath                                 0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                                     kube-system
	9e0cfc150cb8b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                44 seconds ago       Running             node-driver-registrar                    0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                                     kube-system
	0d2846783262d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            45 seconds ago       Running             gadget                                   0                   1ebe1d90a66bb       gadget-dz94f                                                 gadget
	cb1ec346ebb7f       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                                             48 seconds ago       Exited              patch                                    3                   4961c56bd42fc       gcp-auth-certs-patch-wl97z                                   gcp-auth
	887c67d1d3ec6       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             49 seconds ago       Running             controller                               0                   6274a4e2f0743       ingress-nginx-controller-9cc49f96f-p8cl5                     ingress-nginx
	2ee4ab9224d4e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              55 seconds ago       Running             registry-proxy                           0                   b10801d52efbc       registry-proxy-f8ff7                                         kube-system
	7288cfd067650       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              59 seconds ago       Running             csi-resizer                              0                   0d5dc5535135f       csi-hostpath-resizer-0                                       kube-system
	83d5f8807dd5a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   63500abcdcf30       snapshot-controller-7d9fbc56b8-vw7qn                         kube-system
	ea664b7087a7a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              patch                                    0                   47d4059f39887       ingress-nginx-admission-patch-kp9qj                          ingress-nginx
	f33f8d94ddc26       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              create                                   0                   42c41d1283d40       gcp-auth-certs-create-rfsgh                                  gcp-auth
	3627170a702a8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   a865109900bcf       local-path-provisioner-648f6765c9-mlxh6                      local-path-storage
	d1380cc21067a       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   a47f2c58c42e3       nvidia-device-plugin-daemonset-fsrx4                         kube-system
	ff8f96680aca4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   3fd38ea7b3480       csi-hostpathplugin-ncxdq                                     kube-system
	39cf7b8150b29       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           About a minute ago   Running             registry                                 0                   66acba301eda5       registry-66898fdd98-sbgsn                                    kube-system
	e989c71cd7b8b       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   fa0b34a1a868c       kube-ingress-dns-minikube                                    kube-system
	05f18db74839d       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   1600f0c0374a3       yakd-dashboard-5ff678cb9-vhcqh                               yakd-dashboard
	b0a301ec5750f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              create                                   0                   3bcb2ca0af46e       ingress-nginx-admission-create-9d26x                         ingress-nginx
	dad9a565111fe       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   0c39840ea7a89       csi-hostpath-attacher-0                                      kube-system
	a1add4f38e67c       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   b148989f3e511       metrics-server-85b7d694d7-kwc69                              kube-system
	b6beebcffc7ee       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   74163529f02f2       snapshot-controller-7d9fbc56b8-5cc8z                         kube-system
	2d42cedd8f1ba       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   2520491499e00       cloud-spanner-emulator-86bd5cbb97-46cw7                      default
	d80e987069480       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   d082d2eb58ad9       storage-provisioner                                          kube-system
	d8507d936e30a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   f6ddd1c88ebc6       coredns-66bc5c9577-44hjj                                     kube-system
	3d83973804a8c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   70f851a80c361       kindnet-6rwkn                                                kube-system
	02c59261c1cab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   da932c3af748e       kube-proxy-xq75f                                             kube-system
	12f7556456c3b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   ae58ce69f3633       kube-scheduler-addons-961288                                 kube-system
	c21bc28053396       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   4b85467e19714       kube-apiserver-addons-961288                                 kube-system
	6a475d38a34a2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   302397bedbe0e       etcd-addons-961288                                           kube-system
	a2d50687425bc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   6aee562fdfb4e       kube-controller-manager-addons-961288                        kube-system
	
	
	==> coredns [d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4] <==
	[INFO] 10.244.0.11:33758 - 4619 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001642118s
	[INFO] 10.244.0.11:33758 - 7705 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000116809s
	[INFO] 10.244.0.11:33758 - 44340 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000323735s
	[INFO] 10.244.0.11:51714 - 31411 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00019059s
	[INFO] 10.244.0.11:51714 - 31214 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000413549s
	[INFO] 10.244.0.11:39033 - 36948 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000244588s
	[INFO] 10.244.0.11:39033 - 36767 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000209051s
	[INFO] 10.244.0.11:45557 - 25295 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103811s
	[INFO] 10.244.0.11:45557 - 24860 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000151575s
	[INFO] 10.244.0.11:40368 - 62307 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001388184s
	[INFO] 10.244.0.11:40368 - 62495 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001497017s
	[INFO] 10.244.0.11:55237 - 8743 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000117909s
	[INFO] 10.244.0.11:55237 - 8563 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008215s
	[INFO] 10.244.0.21:43278 - 13572 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000373697s
	[INFO] 10.244.0.21:50712 - 3224 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000538284s
	[INFO] 10.244.0.21:53471 - 38019 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000146052s
	[INFO] 10.244.0.21:39073 - 32394 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012686s
	[INFO] 10.244.0.21:35754 - 29539 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119558s
	[INFO] 10.244.0.21:47768 - 50837 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085153s
	[INFO] 10.244.0.21:53416 - 8047 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002146564s
	[INFO] 10.244.0.21:44978 - 20256 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002307517s
	[INFO] 10.244.0.21:48873 - 23376 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002586107s
	[INFO] 10.244.0.21:42710 - 40745 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002571355s
	[INFO] 10.244.0.23:49432 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196621s
	[INFO] 10.244.0.23:58580 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112477s
	
	
	==> describe nodes <==
	Name:               addons-961288
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-961288
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=addons-961288
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T21_52_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-961288
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-961288"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 21:52:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-961288
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 21:54:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 21:54:32 +0000   Wed, 08 Oct 2025 21:52:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 21:54:32 +0000   Wed, 08 Oct 2025 21:52:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 21:54:32 +0000   Wed, 08 Oct 2025 21:52:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 21:54:32 +0000   Wed, 08 Oct 2025 21:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-961288
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 37daf547d8bc4acebb6a0460dc06380e
	  System UUID:                b425271d-6922-48ac-8987-93fee9234cf0
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-86bd5cbb97-46cw7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-dz94f                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gcp-auth                    gcp-auth-78565c9fb4-7bx27                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-p8cl5                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m21s
	  kube-system                 coredns-66bc5c9577-44hjj                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 csi-hostpathplugin-ncxdq                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 etcd-addons-961288                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m32s
	  kube-system                 kindnet-6rwkn                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-addons-961288                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-addons-961288                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-xq75f                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-addons-961288                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 metrics-server-85b7d694d7-kwc69                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m23s
	  kube-system                 nvidia-device-plugin-daemonset-fsrx4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 registry-66898fdd98-sbgsn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 registry-creds-764b6fb674-jqkzb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 registry-proxy-f8ff7                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 snapshot-controller-7d9fbc56b8-5cc8z                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-vw7qn                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  local-path-storage          helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-mlxh6                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-vhcqh                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m26s                  kube-proxy       
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s (x9 over 2m40s)  kubelet          Node addons-961288 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node addons-961288 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s (x7 over 2m40s)  kubelet          Node addons-961288 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s                  kubelet          Node addons-961288 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s                  kubelet          Node addons-961288 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s                  kubelet          Node addons-961288 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m28s                  node-controller  Node addons-961288 event: Registered Node addons-961288 in Controller
	  Normal   NodeReady                106s                   kubelet          Node addons-961288 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015330] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.500107] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036203] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743682] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.166411] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 8 21:52] hrtimer: interrupt took 47692610 ns
	[ +22.956892] overlayfs: idmapped layers are currently not supported
	[  +0.073462] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233] <==
	{"level":"warn","ts":"2025-10-08T21:52:26.086800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.087627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.117820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.136808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.148935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.176217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.194608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.211679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.232989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.244647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.265898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.297965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.306467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.316220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.346701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.377720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.400169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.418076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:26.531509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:42.322512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:52:42.346151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:53:04.252602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:53:04.266810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:53:04.301588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T21:53:04.319224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36358","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [844fd610070b1570ffb554c8c62f56928ee0b1145a929df7408a160502839834] <==
	2025/10/08 21:54:25 GCP Auth Webhook started!
	2025/10/08 21:54:29 Ready to marshal response ...
	2025/10/08 21:54:29 Ready to write response ...
	2025/10/08 21:54:29 Ready to marshal response ...
	2025/10/08 21:54:29 Ready to write response ...
	2025/10/08 21:54:29 Ready to marshal response ...
	2025/10/08 21:54:29 Ready to write response ...
	2025/10/08 21:54:50 Ready to marshal response ...
	2025/10/08 21:54:50 Ready to write response ...
	2025/10/08 21:54:51 Ready to marshal response ...
	2025/10/08 21:54:51 Ready to write response ...
	2025/10/08 21:54:51 Ready to marshal response ...
	2025/10/08 21:54:51 Ready to write response ...
	2025/10/08 21:55:00 Ready to marshal response ...
	2025/10/08 21:55:00 Ready to write response ...
	
	
	==> kernel <==
	 21:55:03 up 37 min,  0 user,  load average: 2.71, 1.54, 0.64
	Linux addons-961288 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6] <==
	I1008 21:53:07.424100       1 metrics.go:72] Registering metrics
	I1008 21:53:07.424155       1 controller.go:711] "Syncing nftables rules"
	E1008 21:53:07.424715       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1008 21:53:16.423217       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:53:16.423289       1 main.go:301] handling current node
	I1008 21:53:26.421770       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:53:26.421809       1 main.go:301] handling current node
	I1008 21:53:36.420774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:53:36.420817       1 main.go:301] handling current node
	I1008 21:53:46.420748       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:53:46.420778       1 main.go:301] handling current node
	I1008 21:53:56.421773       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:53:56.421843       1 main.go:301] handling current node
	I1008 21:54:06.421739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:54:06.421815       1 main.go:301] handling current node
	I1008 21:54:16.420812       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:54:16.420899       1 main.go:301] handling current node
	I1008 21:54:26.421731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:54:26.421760       1 main.go:301] handling current node
	I1008 21:54:36.420708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:54:36.420756       1 main.go:301] handling current node
	I1008 21:54:46.428279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:54:46.428314       1 main.go:301] handling current node
	I1008 21:54:56.421734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 21:54:56.421771       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c] <==
	I1008 21:52:42.068737       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1008 21:52:42.218179       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.101.232.110"}
	W1008 21:52:42.319294       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1008 21:52:42.339672       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1008 21:52:44.152121       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.229.178"}
	W1008 21:53:04.252012       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1008 21:53:04.266815       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1008 21:53:04.301441       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1008 21:53:04.316823       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1008 21:53:17.003488       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.229.178:443: connect: connection refused
	E1008 21:53:17.003570       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.229.178:443: connect: connection refused" logger="UnhandledError"
	W1008 21:53:17.005129       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.229.178:443: connect: connection refused
	E1008 21:53:17.005199       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.229.178:443: connect: connection refused" logger="UnhandledError"
	W1008 21:53:17.090452       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.229.178:443: connect: connection refused
	E1008 21:53:17.090504       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.229.178:443: connect: connection refused" logger="UnhandledError"
	E1008 21:53:37.464753       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.229.237:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.229.237:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.229.237:443: connect: connection refused" logger="UnhandledError"
	W1008 21:53:37.466218       1 handler_proxy.go:99] no RequestInfo found in the context
	E1008 21:53:37.466278       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1008 21:53:37.544922       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1008 21:53:37.582587       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1008 21:54:39.476370       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46598: use of closed network connection
	E1008 21:54:39.627218       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46616: use of closed network connection
	
	
	==> kube-controller-manager [a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235] <==
	I1008 21:52:34.282117       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 21:52:34.283274       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 21:52:34.289767       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 21:52:34.284581       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 21:52:34.289932       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 21:52:34.290181       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 21:52:34.284594       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 21:52:34.284611       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 21:52:34.284629       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 21:52:34.284640       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 21:52:34.284690       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 21:52:34.284935       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 21:52:34.285076       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 21:52:34.293844       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-961288" podCIDRs=["10.244.0.0/24"]
	E1008 21:52:39.588675       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1008 21:53:04.244983       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 21:53:04.245207       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1008 21:53:04.245252       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1008 21:53:04.290938       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1008 21:53:04.295096       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1008 21:53:04.345901       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 21:53:04.395567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 21:53:19.238326       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1008 21:53:34.351200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1008 21:53:34.411810       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd] <==
	I1008 21:52:36.266014       1 server_linux.go:53] "Using iptables proxy"
	I1008 21:52:36.343332       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 21:52:36.443490       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 21:52:36.443533       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1008 21:52:36.443613       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 21:52:36.499863       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 21:52:36.499910       1 server_linux.go:132] "Using iptables Proxier"
	I1008 21:52:36.515830       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 21:52:36.530062       1 server.go:527] "Version info" version="v1.34.1"
	I1008 21:52:36.530088       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 21:52:36.531946       1 config.go:200] "Starting service config controller"
	I1008 21:52:36.531960       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 21:52:36.531985       1 config.go:106] "Starting endpoint slice config controller"
	I1008 21:52:36.531990       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 21:52:36.532001       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 21:52:36.532005       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 21:52:36.532638       1 config.go:309] "Starting node config controller"
	I1008 21:52:36.532646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 21:52:36.532652       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 21:52:36.632570       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 21:52:36.632621       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 21:52:36.632659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b] <==
	E1008 21:52:27.444580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 21:52:27.444625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 21:52:27.448241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 21:52:27.448519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 21:52:27.448707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 21:52:27.448808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 21:52:27.449002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 21:52:27.449103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 21:52:27.449259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 21:52:27.449377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 21:52:28.273711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 21:52:28.303588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 21:52:28.311644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1008 21:52:28.337273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 21:52:28.394807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 21:52:28.416533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 21:52:28.459145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 21:52:28.461462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 21:52:28.480047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 21:52:28.482761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 21:52:28.592458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 21:52:28.611652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 21:52:28.637146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 21:52:28.646613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1008 21:52:31.036660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 21:54:59 addons-961288 kubelet[1286]: I1008 21:54:59.194786    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b549c674-ee70-4711-a6ff-1deafa8d86a5-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963" (OuterVolumeSpecName: "data") pod "b549c674-ee70-4711-a6ff-1deafa8d86a5" (UID: "b549c674-ee70-4711-a6ff-1deafa8d86a5"). InnerVolumeSpecName "pvc-8e4ef856-8168-49ac-bec5-fd30ac333963". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 08 21:54:59 addons-961288 kubelet[1286]: I1008 21:54:59.195118    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b549c674-ee70-4711-a6ff-1deafa8d86a5-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b549c674-ee70-4711-a6ff-1deafa8d86a5" (UID: "b549c674-ee70-4711-a6ff-1deafa8d86a5"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 08 21:54:59 addons-961288 kubelet[1286]: I1008 21:54:59.197859    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b549c674-ee70-4711-a6ff-1deafa8d86a5-kube-api-access-9x6v8" (OuterVolumeSpecName: "kube-api-access-9x6v8") pod "b549c674-ee70-4711-a6ff-1deafa8d86a5" (UID: "b549c674-ee70-4711-a6ff-1deafa8d86a5"). InnerVolumeSpecName "kube-api-access-9x6v8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 08 21:54:59 addons-961288 kubelet[1286]: I1008 21:54:59.296427    1286 reconciler_common.go:299] "Volume detached for volume \"pvc-8e4ef856-8168-49ac-bec5-fd30ac333963\" (UniqueName: \"kubernetes.io/host-path/b549c674-ee70-4711-a6ff-1deafa8d86a5-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963\") on node \"addons-961288\" DevicePath \"\""
	Oct 08 21:54:59 addons-961288 kubelet[1286]: I1008 21:54:59.296477    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9x6v8\" (UniqueName: \"kubernetes.io/projected/b549c674-ee70-4711-a6ff-1deafa8d86a5-kube-api-access-9x6v8\") on node \"addons-961288\" DevicePath \"\""
	Oct 08 21:54:59 addons-961288 kubelet[1286]: I1008 21:54:59.296489    1286 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b549c674-ee70-4711-a6ff-1deafa8d86a5-gcp-creds\") on node \"addons-961288\" DevicePath \"\""
	Oct 08 21:54:59 addons-961288 kubelet[1286]: I1008 21:54:59.985759    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43353b2a0e890180f269014890b7cd5bbc5b22637088c15e3357a32189041c57"
	Oct 08 21:55:01 addons-961288 kubelet[1286]: I1008 21:55:01.122322    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4288m\" (UniqueName: \"kubernetes.io/projected/3df08afa-b177-4167-930b-b39fb437b0e2-kube-api-access-4288m\") pod \"helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963\" (UID: \"3df08afa-b177-4167-930b-b39fb437b0e2\") " pod="local-path-storage/helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963"
	Oct 08 21:55:01 addons-961288 kubelet[1286]: I1008 21:55:01.122388    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3df08afa-b177-4167-930b-b39fb437b0e2-gcp-creds\") pod \"helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963\" (UID: \"3df08afa-b177-4167-930b-b39fb437b0e2\") " pod="local-path-storage/helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963"
	Oct 08 21:55:01 addons-961288 kubelet[1286]: I1008 21:55:01.122570    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3df08afa-b177-4167-930b-b39fb437b0e2-script\") pod \"helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963\" (UID: \"3df08afa-b177-4167-930b-b39fb437b0e2\") " pod="local-path-storage/helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963"
	Oct 08 21:55:01 addons-961288 kubelet[1286]: I1008 21:55:01.122596    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3df08afa-b177-4167-930b-b39fb437b0e2-data\") pod \"helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963\" (UID: \"3df08afa-b177-4167-930b-b39fb437b0e2\") " pod="local-path-storage/helper-pod-delete-pvc-8e4ef856-8168-49ac-bec5-fd30ac333963"
	Oct 08 21:55:01 addons-961288 kubelet[1286]: W1008 21:55:01.631591    1286 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d45eb870dafc6be09f6166aab30dfc34f951a203787fdf1b95e1695d4f9c44be/crio-dd4d8d68df92ebb75f5d8f6402c5f5fd606c9f3b950240d051c5e30a539da3b4 WatchSource:0}: Error finding container dd4d8d68df92ebb75f5d8f6402c5f5fd606c9f3b950240d051c5e30a539da3b4: Status 404 returned error can't find the container with id dd4d8d68df92ebb75f5d8f6402c5f5fd606c9f3b950240d051c5e30a539da3b4
	Oct 08 21:55:01 addons-961288 kubelet[1286]: I1008 21:55:01.915512    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b549c674-ee70-4711-a6ff-1deafa8d86a5" path="/var/lib/kubelet/pods/b549c674-ee70-4711-a6ff-1deafa8d86a5/volumes"
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.165149    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3df08afa-b177-4167-930b-b39fb437b0e2-data\") pod \"3df08afa-b177-4167-930b-b39fb437b0e2\" (UID: \"3df08afa-b177-4167-930b-b39fb437b0e2\") "
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.165232    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3df08afa-b177-4167-930b-b39fb437b0e2-script\") pod \"3df08afa-b177-4167-930b-b39fb437b0e2\" (UID: \"3df08afa-b177-4167-930b-b39fb437b0e2\") "
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.165267    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3df08afa-b177-4167-930b-b39fb437b0e2-gcp-creds\") pod \"3df08afa-b177-4167-930b-b39fb437b0e2\" (UID: \"3df08afa-b177-4167-930b-b39fb437b0e2\") "
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.165338    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4288m\" (UniqueName: \"kubernetes.io/projected/3df08afa-b177-4167-930b-b39fb437b0e2-kube-api-access-4288m\") pod \"3df08afa-b177-4167-930b-b39fb437b0e2\" (UID: \"3df08afa-b177-4167-930b-b39fb437b0e2\") "
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.166225    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3df08afa-b177-4167-930b-b39fb437b0e2-script" (OuterVolumeSpecName: "script") pod "3df08afa-b177-4167-930b-b39fb437b0e2" (UID: "3df08afa-b177-4167-930b-b39fb437b0e2"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.166292    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df08afa-b177-4167-930b-b39fb437b0e2-data" (OuterVolumeSpecName: "data") pod "3df08afa-b177-4167-930b-b39fb437b0e2" (UID: "3df08afa-b177-4167-930b-b39fb437b0e2"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.166316    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3df08afa-b177-4167-930b-b39fb437b0e2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "3df08afa-b177-4167-930b-b39fb437b0e2" (UID: "3df08afa-b177-4167-930b-b39fb437b0e2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.172959    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df08afa-b177-4167-930b-b39fb437b0e2-kube-api-access-4288m" (OuterVolumeSpecName: "kube-api-access-4288m") pod "3df08afa-b177-4167-930b-b39fb437b0e2" (UID: "3df08afa-b177-4167-930b-b39fb437b0e2"). InnerVolumeSpecName "kube-api-access-4288m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.268560    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4288m\" (UniqueName: \"kubernetes.io/projected/3df08afa-b177-4167-930b-b39fb437b0e2-kube-api-access-4288m\") on node \"addons-961288\" DevicePath \"\""
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.268606    1286 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3df08afa-b177-4167-930b-b39fb437b0e2-data\") on node \"addons-961288\" DevicePath \"\""
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.268616    1286 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3df08afa-b177-4167-930b-b39fb437b0e2-script\") on node \"addons-961288\" DevicePath \"\""
	Oct 08 21:55:03 addons-961288 kubelet[1286]: I1008 21:55:03.268626    1286 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3df08afa-b177-4167-930b-b39fb437b0e2-gcp-creds\") on node \"addons-961288\" DevicePath \"\""
	
	
	==> storage-provisioner [d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2] <==
	W1008 21:54:38.965544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:40.968812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:40.975801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:42.978653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:42.983130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:44.987019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:44.991333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:46.994118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:46.998564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:49.002628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:49.008269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:51.011994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:51.017852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:53.022008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:53.036099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:55.055103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:55.081932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:57.093207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:57.098228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:59.101650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:54:59.106384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:55:01.112115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:55:01.120450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:55:03.128550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 21:55:03.134840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-961288 -n addons-961288
helpers_test.go:269: (dbg) Run:  kubectl --context addons-961288 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-9d26x ingress-nginx-admission-patch-kp9qj registry-creds-764b6fb674-jqkzb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-961288 describe pod ingress-nginx-admission-create-9d26x ingress-nginx-admission-patch-kp9qj registry-creds-764b6fb674-jqkzb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-961288 describe pod ingress-nginx-admission-create-9d26x ingress-nginx-admission-patch-kp9qj registry-creds-764b6fb674-jqkzb: exit status 1 (80.998017ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9d26x" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kp9qj" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-jqkzb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-961288 describe pod ingress-nginx-admission-create-9d26x ingress-nginx-admission-patch-kp9qj registry-creds-764b6fb674-jqkzb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable headlamp --alsologtostderr -v=1: exit status 11 (315.902593ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 21:55:04.342253   12398 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:55:04.342559   12398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:04.342595   12398 out.go:374] Setting ErrFile to fd 2...
	I1008 21:55:04.342618   12398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:04.342903   12398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:55:04.343261   12398 mustload.go:65] Loading cluster: addons-961288
	I1008 21:55:04.343671   12398 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:04.343712   12398 addons.go:606] checking whether the cluster is paused
	I1008 21:55:04.343850   12398 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:04.343890   12398 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:55:04.344365   12398 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:55:04.365538   12398 ssh_runner.go:195] Run: systemctl --version
	I1008 21:55:04.365592   12398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:55:04.386063   12398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:55:04.493174   12398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:55:04.493255   12398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:55:04.547998   12398 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:55:04.548020   12398 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:55:04.548024   12398 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:55:04.548028   12398 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:55:04.548031   12398 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:55:04.548036   12398 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:55:04.548039   12398 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:55:04.548042   12398 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:55:04.548045   12398 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:55:04.548052   12398 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:55:04.548056   12398 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:55:04.548059   12398 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:55:04.548062   12398 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:55:04.548065   12398 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:55:04.548068   12398 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:55:04.548076   12398 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:55:04.548080   12398 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:55:04.548084   12398 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:55:04.548088   12398 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:55:04.548090   12398 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:55:04.548095   12398 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:55:04.548098   12398 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:55:04.548101   12398 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:55:04.548104   12398 cri.go:89] found id: ""
	I1008 21:55:04.548165   12398 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:55:04.571186   12398 out.go:203] 
	W1008 21:55:04.573939   12398 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:55:04.573970   12398 out.go:285] * 
	* 
	W1008 21:55:04.578475   12398 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:55:04.581908   12398 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.73s)
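
Note: the MK_ADDON_DISABLE_PAUSED failures above (and in the addon-disable tests that follow) all trace back to the same probe visible in the stderr logs: "addons disable" first checks whether the cluster is paused by listing kube-system containers with crictl and then asking runc for its container list, and on this crio node "sudo runc list -f json" exits 1 because /run/runc does not exist. The Go sketch below only reproduces that probe from the commands shown in the log; it is a minimal illustration run directly on the node (an assumption), not minikube's actual implementation.

	// pausecheck.go - minimal sketch of the paused-state probe seen in the log:
	// list kube-system containers via crictl, then list runc containers.
	// On this node the second command is expected to fail with
	// "open /run/runc: no such file or directory".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\nerr=%v\n%s\n", name, args, err, out)
	}

	func main() {
		run("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		run("sudo", "runc", "list", "-f", "json")
	}
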

                                                
                                    
TestAddons/parallel/CloudSpanner (5.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-46cw7" [ee1e2d69-94e2-4c4b-a56e-fe36a8422e7f] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009424781s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (333.541136ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 21:55:00.590437   11777 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:55:00.590663   11777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:00.590677   11777 out.go:374] Setting ErrFile to fd 2...
	I1008 21:55:00.590682   11777 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:00.591020   11777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:55:00.591352   11777 mustload.go:65] Loading cluster: addons-961288
	I1008 21:55:00.591780   11777 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:00.591803   11777 addons.go:606] checking whether the cluster is paused
	I1008 21:55:00.591957   11777 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:00.591983   11777 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:55:00.592801   11777 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:55:00.620735   11777 ssh_runner.go:195] Run: systemctl --version
	I1008 21:55:00.620802   11777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:55:00.644499   11777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:55:00.764699   11777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:55:00.764789   11777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:55:00.810567   11777 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:55:00.810586   11777 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:55:00.810591   11777 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:55:00.810595   11777 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:55:00.810599   11777 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:55:00.810603   11777 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:55:00.810606   11777 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:55:00.810609   11777 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:55:00.810612   11777 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:55:00.810622   11777 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:55:00.810626   11777 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:55:00.810629   11777 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:55:00.810632   11777 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:55:00.810635   11777 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:55:00.810639   11777 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:55:00.810646   11777 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:55:00.810650   11777 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:55:00.810654   11777 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:55:00.810658   11777 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:55:00.810661   11777 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:55:00.810666   11777 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:55:00.810669   11777 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:55:00.810672   11777 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:55:00.810675   11777 cri.go:89] found id: ""
	I1008 21:55:00.810727   11777 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:55:00.834214   11777 out.go:203] 
	W1008 21:55:00.837228   11777 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:55:00.837311   11777 out.go:285] * 
	* 
	W1008 21:55:00.843073   11777 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:55:00.846935   11777 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.35s)

                                                
                                    
TestAddons/parallel/LocalPath (9.92s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-961288 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-961288 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-961288 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b549c674-ee70-4711-a6ff-1deafa8d86a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b549c674-ee70-4711-a6ff-1deafa8d86a5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b549c674-ee70-4711-a6ff-1deafa8d86a5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003337731s
addons_test.go:967: (dbg) Run:  kubectl --context addons-961288 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 ssh "cat /opt/local-path-provisioner/pvc-8e4ef856-8168-49ac-bec5-fd30ac333963_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-961288 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-961288 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (355.569165ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 21:55:01.046513   11847 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:55:01.046667   11847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:01.046679   11847 out.go:374] Setting ErrFile to fd 2...
	I1008 21:55:01.046685   11847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:55:01.046952   11847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:55:01.047229   11847 mustload.go:65] Loading cluster: addons-961288
	I1008 21:55:01.047594   11847 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:01.047611   11847 addons.go:606] checking whether the cluster is paused
	I1008 21:55:01.047713   11847 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:55:01.047736   11847 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:55:01.048190   11847 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:55:01.067018   11847 ssh_runner.go:195] Run: systemctl --version
	I1008 21:55:01.067073   11847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:55:01.091066   11847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:55:01.209933   11847 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:55:01.210035   11847 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:55:01.282713   11847 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:55:01.282737   11847 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:55:01.282744   11847 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:55:01.282749   11847 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:55:01.282754   11847 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:55:01.282759   11847 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:55:01.282775   11847 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:55:01.282780   11847 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:55:01.282783   11847 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:55:01.282790   11847 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:55:01.282793   11847 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:55:01.282797   11847 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:55:01.282800   11847 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:55:01.282803   11847 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:55:01.282806   11847 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:55:01.282816   11847 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:55:01.282820   11847 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:55:01.282824   11847 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:55:01.282827   11847 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:55:01.282831   11847 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:55:01.282836   11847 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:55:01.282839   11847 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:55:01.282842   11847 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:55:01.282846   11847 cri.go:89] found id: ""
	I1008 21:55:01.282906   11847 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:55:01.301056   11847 out.go:203] 
	W1008 21:55:01.303953   11847 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:55:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:55:01.303977   11847 out.go:285] * 
	* 
	W1008 21:55:01.308634   11847 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:55:01.311688   11847 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.92s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-fsrx4" [a3f70c68-9e64-4747-8c87-1443b583919f] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003495427s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (246.965575ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 21:54:51.202084   11360 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:54:51.202325   11360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:54:51.202357   11360 out.go:374] Setting ErrFile to fd 2...
	I1008 21:54:51.202378   11360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:54:51.202729   11360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:54:51.203049   11360 mustload.go:65] Loading cluster: addons-961288
	I1008 21:54:51.203447   11360 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:54:51.203489   11360 addons.go:606] checking whether the cluster is paused
	I1008 21:54:51.203618   11360 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:54:51.203657   11360 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:54:51.204109   11360 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:54:51.220982   11360 ssh_runner.go:195] Run: systemctl --version
	I1008 21:54:51.221046   11360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:54:51.239878   11360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:54:51.344234   11360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:54:51.344311   11360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:54:51.372526   11360 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:54:51.372588   11360 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:54:51.372607   11360 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:54:51.372631   11360 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:54:51.372665   11360 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:54:51.372688   11360 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:54:51.372708   11360 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:54:51.372731   11360 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:54:51.372764   11360 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:54:51.372790   11360 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:54:51.372809   11360 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:54:51.372832   11360 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:54:51.372865   11360 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:54:51.372889   11360 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:54:51.372911   11360 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:54:51.372935   11360 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:54:51.372976   11360 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:54:51.373003   11360 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:54:51.373025   11360 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:54:51.373048   11360 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:54:51.373086   11360 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:54:51.373109   11360 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:54:51.373127   11360 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:54:51.373149   11360 cri.go:89] found id: ""
	I1008 21:54:51.373231   11360 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:54:51.387785   11360 out.go:203] 
	W1008 21:54:51.390785   11360 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:54:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:54:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:54:51.390807   11360 out.go:285] * 
	* 
	W1008 21:54:51.395057   11360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:54:51.398121   11360 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                    
TestAddons/parallel/Yakd (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vhcqh" [c1c7afdf-a48c-46eb-85aa-10ef9c706c31] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004008124s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-961288 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-961288 addons disable yakd --alsologtostderr -v=1: exit status 11 (249.305659ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 21:54:45.946588   11268 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:54:45.946831   11268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:54:45.946861   11268 out.go:374] Setting ErrFile to fd 2...
	I1008 21:54:45.946886   11268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:54:45.947303   11268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:54:45.948201   11268 mustload.go:65] Loading cluster: addons-961288
	I1008 21:54:45.948705   11268 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:54:45.948759   11268 addons.go:606] checking whether the cluster is paused
	I1008 21:54:45.948931   11268 config.go:182] Loaded profile config "addons-961288": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 21:54:45.948984   11268 host.go:66] Checking if "addons-961288" exists ...
	I1008 21:54:45.949693   11268 cli_runner.go:164] Run: docker container inspect addons-961288 --format={{.State.Status}}
	I1008 21:54:45.966393   11268 ssh_runner.go:195] Run: systemctl --version
	I1008 21:54:45.966443   11268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-961288
	I1008 21:54:45.984885   11268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/addons-961288/id_rsa Username:docker}
	I1008 21:54:46.092103   11268 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 21:54:46.092198   11268 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 21:54:46.121673   11268 cri.go:89] found id: "1b176619cba2b927583b3a561af8517afac25a6b5f93cd3782d4fd78c1821797"
	I1008 21:54:46.121698   11268 cri.go:89] found id: "6914889d561d2c36dff931433277ec9d81899c82f12d21eaf14a09e0cdcdeabd"
	I1008 21:54:46.121706   11268 cri.go:89] found id: "04cded645e0f5f5a76bba75c0adceca9f8fcfa74d1c68df0baee3299b027aed8"
	I1008 21:54:46.121711   11268 cri.go:89] found id: "53f8bbdff2a616b7345192f0eeb1f8df78e19d727da5eb2df9720cba626d6731"
	I1008 21:54:46.121715   11268 cri.go:89] found id: "9e0cfc150cb8bc1c1f8f07a509bf0b03342f2025faf26d4fd8a1b00b85300af2"
	I1008 21:54:46.121719   11268 cri.go:89] found id: "2ee4ab9224d4e17eb18e0c697addb9a1e3e433d4982c82ca6abc756556e63856"
	I1008 21:54:46.121722   11268 cri.go:89] found id: "7288cfd0676503ca9aa146f24e6a58bd3932865f9a20362cf4508cba496e1a3c"
	I1008 21:54:46.121726   11268 cri.go:89] found id: "83d5f8807dd5a027d830b94da1c21140ac4ee0bf1f86cc7017b3c0e0b453b10e"
	I1008 21:54:46.121729   11268 cri.go:89] found id: "d1380cc21067ab0f3b0963c32b79029982cdd1db8fe69794e577c7e15f9fd306"
	I1008 21:54:46.121744   11268 cri.go:89] found id: "ff8f96680aca478b4aa6e0037111c3c21b1f55fe73af45266adf7e0f09de7d3e"
	I1008 21:54:46.121750   11268 cri.go:89] found id: "39cf7b8150b29c04cbfc45c59258c66c80aca22ece2100c1b72a981a93e3a540"
	I1008 21:54:46.121754   11268 cri.go:89] found id: "e989c71cd7b8b07b333ffeb7ef522006615e74159854b7446efdb26e4fa1dc40"
	I1008 21:54:46.121760   11268 cri.go:89] found id: "dad9a565111fec66ed938f12a4a65ec1a6f77036965bdb5b71b1b49d1dfac9f8"
	I1008 21:54:46.121764   11268 cri.go:89] found id: "a1add4f38e67c6a35747ce7aa6ff1fdac102feb208001fafc877786678aa5297"
	I1008 21:54:46.121767   11268 cri.go:89] found id: "b6beebcffc7ee4ebe3df0d69b536fdae92dce66caa5cba9edb30a43b6e6a0c98"
	I1008 21:54:46.121773   11268 cri.go:89] found id: "d80e9870694806ccf871cf9834de3bb65366272f9fc7601cc8739f969cdc3ab2"
	I1008 21:54:46.121780   11268 cri.go:89] found id: "d8507d936e30a88a76ef6583b070a91958e0e1c4b86da5b8df6e15324c84b2a4"
	I1008 21:54:46.121785   11268 cri.go:89] found id: "3d83973804a8cf95cd8c318ec07cf258fc2f76426a271ba716d43d6cd70848f6"
	I1008 21:54:46.121788   11268 cri.go:89] found id: "02c59261c1cab82f526d80cd85056f40b724cc50c23d93ed87cad88e078709dd"
	I1008 21:54:46.121791   11268 cri.go:89] found id: "12f7556456c3bad3aeab9a224dfd842142a18e55b8ed09e7f3c29dc112a1916b"
	I1008 21:54:46.121797   11268 cri.go:89] found id: "c21bc28053396f6c7479e50ef2386524180a911ad6f59e68e5471bd841bb534c"
	I1008 21:54:46.121800   11268 cri.go:89] found id: "6a475d38a34a25e21ba9c4c61cc248d84c7411beb0afd90135b116ca4a71e233"
	I1008 21:54:46.121803   11268 cri.go:89] found id: "a2d50687425bc93c34514dccaee68623d8763dd8851394180c2fe91f57403235"
	I1008 21:54:46.121806   11268 cri.go:89] found id: ""
	I1008 21:54:46.121876   11268 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 21:54:46.136681   11268 out.go:203] 
	W1008 21:54:46.139466   11268 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:54:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:54:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 21:54:46.139497   11268 out.go:285] * 
	* 
	W1008 21:54:46.143717   11268 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 21:54:46.146592   11268 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-961288 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.25s)
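
Note on this failure: yakd itself reported healthy within ~6 seconds; the exit status 11 comes from the disable step, not from the addon. Before disabling anything, "minikube addons disable" checks whether the cluster is paused (addons.go:606, "checking whether the cluster is paused" in the stderr above) by running "sudo runc list -f json" on the node. On this CRI-O node that command exits 1 with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED before yakd is ever touched. A minimal sketch for reproducing the failing check by hand, assuming the addons-961288 profile from this log is still up (these commands are illustrative and are not part of the test suite):

    # the container listing step, which succeeds (IDs match the log above)
    minikube -p addons-961288 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

    # the pause check that fails: runc's default state directory is absent
    minikube -p addons-961288 ssh "sudo runc list -f json"
    minikube -p addons-961288 ssh "ls -ld /run/runc"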

TestForceSystemdFlag (513.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1008 22:51:38.633294    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m28.566964475s)

-- stdout --
	* [force-systemd-flag-385382] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-385382" primary control-plane node in "force-systemd-flag-385382" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1008 22:49:48.712236  171796 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:49:48.712360  171796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:49:48.712370  171796 out.go:374] Setting ErrFile to fd 2...
	I1008 22:49:48.712375  171796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:49:48.712735  171796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:49:48.713214  171796 out.go:368] Setting JSON to false
	I1008 22:49:48.714182  171796 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5539,"bootTime":1759958250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:49:48.714277  171796 start.go:141] virtualization:  
	I1008 22:49:48.717926  171796 out.go:179] * [force-systemd-flag-385382] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:49:48.722658  171796 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:49:48.722716  171796 notify.go:220] Checking for updates...
	I1008 22:49:48.726226  171796 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:49:48.729723  171796 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:49:48.732977  171796 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:49:48.736143  171796 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:49:48.739393  171796 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:49:48.743015  171796 config.go:182] Loaded profile config "force-systemd-env-092546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:49:48.743143  171796 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:49:48.767594  171796 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:49:48.767735  171796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:49:48.828490  171796 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:49:48.815021378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:49:48.828610  171796 docker.go:318] overlay module found
	I1008 22:49:48.831820  171796 out.go:179] * Using the docker driver based on user configuration
	I1008 22:49:48.834758  171796 start.go:305] selected driver: docker
	I1008 22:49:48.834777  171796 start.go:925] validating driver "docker" against <nil>
	I1008 22:49:48.834792  171796 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:49:48.835515  171796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:49:48.891047  171796 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:49:48.881925666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:49:48.891198  171796 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 22:49:48.891435  171796 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 22:49:48.894608  171796 out.go:179] * Using Docker driver with root privileges
	I1008 22:49:48.897546  171796 cni.go:84] Creating CNI manager for ""
	I1008 22:49:48.897620  171796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:49:48.897742  171796 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 22:49:48.897822  171796 start.go:349] cluster config:
	{Name:force-systemd-flag-385382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:49:48.900938  171796 out.go:179] * Starting "force-systemd-flag-385382" primary control-plane node in "force-systemd-flag-385382" cluster
	I1008 22:49:48.903830  171796 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:49:48.906802  171796 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:49:48.909650  171796 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:49:48.909682  171796 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:49:48.909703  171796 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 22:49:48.909725  171796 cache.go:58] Caching tarball of preloaded images
	I1008 22:49:48.909808  171796 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:49:48.909817  171796 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 22:49:48.909924  171796 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/config.json ...
	I1008 22:49:48.909953  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/config.json: {Name:mk4724c7d82e25ae3bc0667fb81e54635c623861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:49:48.929803  171796 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:49:48.929828  171796 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:49:48.929853  171796 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:49:48.929875  171796 start.go:360] acquireMachinesLock for force-systemd-flag-385382: {Name:mk7c40943b856235fde6dc84ba727699096ce250 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:49:48.929976  171796 start.go:364] duration metric: took 83.382µs to acquireMachinesLock for "force-systemd-flag-385382"
	I1008 22:49:48.930007  171796 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-385382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:49:48.930076  171796 start.go:125] createHost starting for "" (driver="docker")
	I1008 22:49:48.933700  171796 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:49:48.933965  171796 start.go:159] libmachine.API.Create for "force-systemd-flag-385382" (driver="docker")
	I1008 22:49:48.934014  171796 client.go:168] LocalClient.Create starting
	I1008 22:49:48.934104  171796 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:49:48.934150  171796 main.go:141] libmachine: Decoding PEM data...
	I1008 22:49:48.934169  171796 main.go:141] libmachine: Parsing certificate...
	I1008 22:49:48.934231  171796 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:49:48.934253  171796 main.go:141] libmachine: Decoding PEM data...
	I1008 22:49:48.934263  171796 main.go:141] libmachine: Parsing certificate...
	I1008 22:49:48.934642  171796 cli_runner.go:164] Run: docker network inspect force-systemd-flag-385382 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:49:48.951463  171796 cli_runner.go:211] docker network inspect force-systemd-flag-385382 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:49:48.951556  171796 network_create.go:284] running [docker network inspect force-systemd-flag-385382] to gather additional debugging logs...
	I1008 22:49:48.951580  171796 cli_runner.go:164] Run: docker network inspect force-systemd-flag-385382
	W1008 22:49:48.967413  171796 cli_runner.go:211] docker network inspect force-systemd-flag-385382 returned with exit code 1
	I1008 22:49:48.967452  171796 network_create.go:287] error running [docker network inspect force-systemd-flag-385382]: docker network inspect force-systemd-flag-385382: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-385382 not found
	I1008 22:49:48.967467  171796 network_create.go:289] output of [docker network inspect force-systemd-flag-385382]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-385382 not found
	
	** /stderr **
	I1008 22:49:48.967580  171796 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:49:48.984561  171796 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:49:48.984890  171796 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:49:48.985131  171796 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:49:48.985558  171796 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a11a40}
	I1008 22:49:48.985583  171796 network_create.go:124] attempt to create docker network force-systemd-flag-385382 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1008 22:49:48.985751  171796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-385382 force-systemd-flag-385382
	I1008 22:49:49.048324  171796 network_create.go:108] docker network force-systemd-flag-385382 192.168.76.0/24 created
	I1008 22:49:49.048359  171796 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-385382" container
	I1008 22:49:49.048446  171796 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:49:49.065296  171796 cli_runner.go:164] Run: docker volume create force-systemd-flag-385382 --label name.minikube.sigs.k8s.io=force-systemd-flag-385382 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:49:49.084256  171796 oci.go:103] Successfully created a docker volume force-systemd-flag-385382
	I1008 22:49:49.084338  171796 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-385382-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-385382 --entrypoint /usr/bin/test -v force-systemd-flag-385382:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:49:49.627212  171796 oci.go:107] Successfully prepared a docker volume force-systemd-flag-385382
	I1008 22:49:49.627269  171796 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:49:49.627288  171796 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:49:49.627358  171796 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-385382:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 22:49:54.086598  171796 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-385382:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.459181997s)
	I1008 22:49:54.086634  171796 kic.go:203] duration metric: took 4.459341777s to extract preloaded images to volume ...
	W1008 22:49:54.086771  171796 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:49:54.086892  171796 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:49:54.145908  171796 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-385382 --name force-systemd-flag-385382 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-385382 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-385382 --network force-systemd-flag-385382 --ip 192.168.76.2 --volume force-systemd-flag-385382:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:49:54.439406  171796 cli_runner.go:164] Run: docker container inspect force-systemd-flag-385382 --format={{.State.Running}}
	I1008 22:49:54.464206  171796 cli_runner.go:164] Run: docker container inspect force-systemd-flag-385382 --format={{.State.Status}}
	I1008 22:49:54.490208  171796 cli_runner.go:164] Run: docker exec force-systemd-flag-385382 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:49:54.554845  171796 oci.go:144] the created container "force-systemd-flag-385382" has a running status.
	I1008 22:49:54.554885  171796 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa...
	I1008 22:49:55.865931  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 22:49:55.865984  171796 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:49:55.886042  171796 cli_runner.go:164] Run: docker container inspect force-systemd-flag-385382 --format={{.State.Status}}
	I1008 22:49:55.902706  171796 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:49:55.902730  171796 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-385382 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:49:55.944885  171796 cli_runner.go:164] Run: docker container inspect force-systemd-flag-385382 --format={{.State.Status}}
	I1008 22:49:55.966739  171796 machine.go:93] provisionDockerMachine start ...
	I1008 22:49:55.966846  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:55.985010  171796 main.go:141] libmachine: Using SSH client type: native
	I1008 22:49:55.985339  171796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1008 22:49:55.985358  171796 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:49:56.137775  171796 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-385382
	
	I1008 22:49:56.137800  171796 ubuntu.go:182] provisioning hostname "force-systemd-flag-385382"
	I1008 22:49:56.137894  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:56.156414  171796 main.go:141] libmachine: Using SSH client type: native
	I1008 22:49:56.156740  171796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1008 22:49:56.156759  171796 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-385382 && echo "force-systemd-flag-385382" | sudo tee /etc/hostname
	I1008 22:49:56.310909  171796 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-385382
	
	I1008 22:49:56.311001  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:56.329519  171796 main.go:141] libmachine: Using SSH client type: native
	I1008 22:49:56.329868  171796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1008 22:49:56.329893  171796 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-385382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-385382/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-385382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:49:56.477916  171796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:49:56.477948  171796 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:49:56.477976  171796 ubuntu.go:190] setting up certificates
	I1008 22:49:56.477985  171796 provision.go:84] configureAuth start
	I1008 22:49:56.478059  171796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-385382
	I1008 22:49:56.496395  171796 provision.go:143] copyHostCerts
	I1008 22:49:56.496436  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:49:56.496467  171796 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:49:56.496479  171796 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:49:56.496555  171796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:49:56.496642  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:49:56.496670  171796 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:49:56.496681  171796 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:49:56.496711  171796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:49:56.496770  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:49:56.496793  171796 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:49:56.496801  171796 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:49:56.496831  171796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:49:56.496895  171796 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-385382 san=[127.0.0.1 192.168.76.2 force-systemd-flag-385382 localhost minikube]
	I1008 22:49:56.857282  171796 provision.go:177] copyRemoteCerts
	I1008 22:49:56.857356  171796 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:49:56.857411  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:56.874832  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:56.977398  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 22:49:56.977465  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 22:49:56.995163  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 22:49:56.995230  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:49:57.015746  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 22:49:57.015812  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1008 22:49:57.034292  171796 provision.go:87] duration metric: took 556.288441ms to configureAuth
	I1008 22:49:57.034330  171796 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:49:57.034533  171796 config.go:182] Loaded profile config "force-systemd-flag-385382": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:49:57.034643  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.051931  171796 main.go:141] libmachine: Using SSH client type: native
	I1008 22:49:57.052255  171796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1008 22:49:57.052279  171796 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:49:57.318268  171796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:49:57.318294  171796 machine.go:96] duration metric: took 1.351533543s to provisionDockerMachine
	I1008 22:49:57.318304  171796 client.go:171] duration metric: took 8.384278375s to LocalClient.Create
	I1008 22:49:57.318339  171796 start.go:167] duration metric: took 8.384378922s to libmachine.API.Create "force-systemd-flag-385382"
	I1008 22:49:57.318363  171796 start.go:293] postStartSetup for "force-systemd-flag-385382" (driver="docker")
	I1008 22:49:57.318378  171796 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:49:57.318477  171796 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:49:57.318535  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.337941  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:57.442062  171796 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:49:57.446156  171796 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:49:57.446189  171796 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:49:57.446203  171796 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:49:57.446261  171796 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:49:57.446352  171796 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:49:57.446365  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> /etc/ssl/certs/42862.pem
	I1008 22:49:57.446468  171796 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:49:57.454690  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:49:57.473501  171796 start.go:296] duration metric: took 155.118367ms for postStartSetup
	I1008 22:49:57.473992  171796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-385382
	I1008 22:49:57.490786  171796 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/config.json ...
	I1008 22:49:57.491079  171796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:49:57.491139  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.508070  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:57.606626  171796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:49:57.611673  171796 start.go:128] duration metric: took 8.681580536s to createHost
	I1008 22:49:57.611700  171796 start.go:83] releasing machines lock for "force-systemd-flag-385382", held for 8.681709587s
	I1008 22:49:57.611774  171796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-385382
	I1008 22:49:57.628772  171796 ssh_runner.go:195] Run: cat /version.json
	I1008 22:49:57.628804  171796 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:49:57.628824  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.628879  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.645613  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:57.648323  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:57.833370  171796 ssh_runner.go:195] Run: systemctl --version
	I1008 22:49:57.840089  171796 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:49:57.880249  171796 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:49:57.884381  171796 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:49:57.884456  171796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:49:57.913403  171796 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:49:57.913428  171796 start.go:495] detecting cgroup driver to use...
	I1008 22:49:57.913441  171796 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1008 22:49:57.913498  171796 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:49:57.930576  171796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:49:57.944205  171796 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:49:57.944302  171796 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:49:57.962011  171796 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:49:57.980810  171796 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:49:58.104608  171796 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:49:58.222664  171796 docker.go:234] disabling docker service ...
	I1008 22:49:58.222747  171796 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:49:58.248972  171796 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:49:58.262862  171796 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:49:58.389930  171796 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:49:58.512350  171796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:49:58.525595  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:49:58.540156  171796 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:49:58.540242  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.549119  171796 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 22:49:58.549216  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.558308  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.567398  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.575922  171796 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:49:58.584034  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.592779  171796 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.606007  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.614900  171796 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:49:58.622481  171796 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:49:58.630378  171796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:49:58.737671  171796 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 22:49:58.862828  171796 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:49:58.862948  171796 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:49:58.866782  171796 start.go:563] Will wait 60s for crictl version
	I1008 22:49:58.866887  171796 ssh_runner.go:195] Run: which crictl
	I1008 22:49:58.870510  171796 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:49:58.899067  171796 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:49:58.899160  171796 ssh_runner.go:195] Run: crio --version
	I1008 22:49:58.926179  171796 ssh_runner.go:195] Run: crio --version
	I1008 22:49:58.960319  171796 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:49:58.963490  171796 cli_runner.go:164] Run: docker network inspect force-systemd-flag-385382 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:49:58.980189  171796 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 22:49:58.984052  171796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:49:58.993800  171796 kubeadm.go:883] updating cluster {Name:force-systemd-flag-385382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:49:58.993916  171796 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:49:58.993971  171796 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:49:59.025951  171796 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:49:59.025975  171796 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:49:59.026040  171796 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:49:59.054812  171796 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:49:59.054841  171796 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:49:59.054853  171796 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1008 22:49:59.054956  171796 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-385382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:49:59.055041  171796 ssh_runner.go:195] Run: crio config
	I1008 22:49:59.129510  171796 cni.go:84] Creating CNI manager for ""
	I1008 22:49:59.129535  171796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:49:59.129553  171796 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:49:59.129576  171796 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-385382 NodeName:force-systemd-flag-385382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:49:59.129729  171796 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-385382"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
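	(Editorial note, not part of the captured log.) The generated kubeadm config above pins cgroupDriver: systemd for the kubelet and points everything at the CRI-O socket, which is what TestForceSystemdFlag exercises. A minimal sketch of how one might confirm the node side agrees — assumed commands, run on the node (e.g. minikube ssh -p force-systemd-flag-385382):
	
		# CRI-O should report the systemd cgroup manager the kubeadm config expects
		sudo crio config | grep -i cgroup_manager
		# once kubeadm has written it, the kubelet's rendered config should show the matching driver
		sudo grep cgroupDriver /var/lib/kubelet/config.yaml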
	
	I1008 22:49:59.129802  171796 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:49:59.138084  171796 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:49:59.138200  171796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:49:59.146009  171796 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1008 22:49:59.161133  171796 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:49:59.178605  171796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1008 22:49:59.196778  171796 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:49:59.203550  171796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:49:59.213577  171796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:49:59.334736  171796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:49:59.350400  171796 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382 for IP: 192.168.76.2
	I1008 22:49:59.350422  171796 certs.go:195] generating shared ca certs ...
	I1008 22:49:59.350439  171796 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:49:59.350574  171796 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:49:59.350623  171796 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:49:59.350635  171796 certs.go:257] generating profile certs ...
	I1008 22:49:59.350690  171796 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.key
	I1008 22:49:59.350717  171796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.crt with IP's: []
	I1008 22:49:59.477834  171796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.crt ...
	I1008 22:49:59.477863  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.crt: {Name:mkcee2ee18d6ccbe255790a5d8793754f69334e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:49:59.478073  171796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.key ...
	I1008 22:49:59.478090  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.key: {Name:mk3fd32a08b0d274ece9c9af9af1e7c02122a456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:49:59.478192  171796 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key.bddc7413
	I1008 22:49:59.478211  171796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt.bddc7413 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1008 22:50:00.088051  171796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt.bddc7413 ...
	I1008 22:50:00.089713  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt.bddc7413: {Name:mk3c66a6004f657b3c1cd121f299b346cab07d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:50:00.090032  171796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key.bddc7413 ...
	I1008 22:50:00.101721  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key.bddc7413: {Name:mk6f6444b6ad2a5850f0a82bf2bbb1ad506b7704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:50:00.102041  171796 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt.bddc7413 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt
	I1008 22:50:00.102218  171796 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key.bddc7413 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key
	I1008 22:50:00.102350  171796 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key
	I1008 22:50:00.102393  171796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt with IP's: []
	I1008 22:50:00.963716  171796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt ...
	I1008 22:50:00.963798  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt: {Name:mkca240e3833bd193a08b4d38da29c0b6b39a649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:50:00.964057  171796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key ...
	I1008 22:50:00.964075  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key: {Name:mk09568b6f55406b366b608c84e95860ac10c91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:50:00.964162  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 22:50:00.964189  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 22:50:00.964202  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 22:50:00.964221  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 22:50:00.964233  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 22:50:00.964248  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 22:50:00.964260  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 22:50:00.964276  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 22:50:00.964328  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:50:00.964381  171796 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:50:00.964399  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:50:00.964449  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:50:00.964478  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:50:00.964503  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:50:00.964550  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:50:00.964586  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:50:00.964598  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem -> /usr/share/ca-certificates/4286.pem
	I1008 22:50:00.964610  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> /usr/share/ca-certificates/42862.pem
	I1008 22:50:00.965216  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:50:00.984020  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:50:01.003580  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:50:01.024793  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:50:01.044740  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1008 22:50:01.065284  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:50:01.085980  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:50:01.110682  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 22:50:01.139512  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:50:01.165263  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:50:01.196172  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:50:01.216908  171796 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:50:01.233389  171796 ssh_runner.go:195] Run: openssl version
	I1008 22:50:01.240201  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:50:01.249719  171796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:50:01.253984  171796 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:50:01.254047  171796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:50:01.296207  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:50:01.305436  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:50:01.314700  171796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:50:01.318757  171796 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:50:01.318847  171796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:50:01.361711  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:50:01.371384  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:50:01.380490  171796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:50:01.384896  171796 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:50:01.384963  171796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:50:01.427361  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:50:01.436426  171796 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:50:01.440285  171796 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 22:50:01.440372  171796 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-385382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:50:01.440463  171796 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:50:01.440528  171796 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:50:01.468795  171796 cri.go:89] found id: ""
	I1008 22:50:01.468866  171796 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:50:01.477392  171796 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:50:01.485949  171796 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:50:01.486047  171796 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:50:01.494670  171796 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:50:01.494689  171796 kubeadm.go:157] found existing configuration files:
	
	I1008 22:50:01.494744  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:50:01.502944  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:50:01.503031  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:50:01.510785  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:50:01.518858  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:50:01.518932  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:50:01.526551  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:50:01.535053  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:50:01.535165  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:50:01.543489  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:50:01.551669  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:50:01.551746  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:50:01.560118  171796 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:50:01.609923  171796 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:50:01.610312  171796 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:50:01.635874  171796 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:50:01.635957  171796 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:50:01.636002  171796 kubeadm.go:318] OS: Linux
	I1008 22:50:01.636057  171796 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:50:01.636113  171796 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:50:01.636167  171796 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:50:01.636221  171796 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:50:01.636276  171796 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:50:01.636331  171796 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:50:01.636383  171796 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:50:01.636438  171796 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:50:01.636490  171796 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:50:01.713117  171796 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:50:01.713241  171796 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:50:01.713343  171796 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:50:01.726078  171796 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:50:01.732747  171796 out.go:252]   - Generating certificates and keys ...
	I1008 22:50:01.732941  171796 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:50:01.733076  171796 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:50:01.958105  171796 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 22:50:02.572421  171796 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 22:50:02.821561  171796 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 22:50:03.137989  171796 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 22:50:03.464563  171796 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 22:50:03.464842  171796 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 22:50:04.024680  171796 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 22:50:04.024845  171796 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 22:50:05.561249  171796 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 22:50:05.876259  171796 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 22:50:06.657160  171796 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 22:50:06.657460  171796 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:50:06.897006  171796 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:50:07.385522  171796 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:50:07.552829  171796 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:50:07.640297  171796 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:50:09.059816  171796 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:50:09.060397  171796 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:50:09.063837  171796 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:50:09.067495  171796 out.go:252]   - Booting up control plane ...
	I1008 22:50:09.067610  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:50:09.067974  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:50:09.068745  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:50:09.085807  171796 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:50:09.086154  171796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:50:09.094498  171796 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:50:09.094867  171796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:50:09.094925  171796 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:50:09.236248  171796 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:50:09.236373  171796 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:50:11.240868  171796 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000927309s
	I1008 22:50:11.240982  171796 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:50:11.241077  171796 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1008 22:50:11.241182  171796 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:50:11.241267  171796 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:54:11.241650  171796 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001021269s
	I1008 22:54:11.242003  171796 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001183494s
	I1008 22:54:11.242429  171796 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001554534s
	I1008 22:54:11.242483  171796 kubeadm.go:318] 
	I1008 22:54:11.242576  171796 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 22:54:11.242689  171796 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 22:54:11.242785  171796 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 22:54:11.242913  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 22:54:11.242993  171796 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 22:54:11.243075  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 22:54:11.243083  171796 kubeadm.go:318] 
	I1008 22:54:11.248122  171796 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:54:11.248367  171796 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:54:11.248483  171796 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:54:11.249088  171796 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 22:54:11.249165  171796 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1008 22:54:11.249321  171796 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000927309s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001021269s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001183494s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001554534s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000927309s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001021269s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001183494s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001554534s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
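	(Editorial note, not part of the captured log.) When reproducing this failure by hand, the hint above is the natural next step; these are the same commands the kubeadm output suggests, with CONTAINERID as a placeholder for whatever the first command returns:
	
		# list any control-plane containers CRI-O actually started
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# then read the logs of the failing one
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID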
	
	I1008 22:54:11.249403  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 22:54:11.799431  171796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:54:11.813202  171796 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:54:11.813258  171796 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:54:11.821436  171796 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:54:11.821456  171796 kubeadm.go:157] found existing configuration files:
	
	I1008 22:54:11.821507  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:54:11.829391  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:54:11.829500  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:54:11.836959  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:54:11.845137  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:54:11.845208  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:54:11.852823  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:54:11.861211  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:54:11.861296  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:54:11.868785  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:54:11.876745  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:54:11.876856  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:54:11.884940  171796 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:54:11.924932  171796 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:54:11.925194  171796 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:54:11.951034  171796 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:54:11.951108  171796 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:54:11.951145  171796 kubeadm.go:318] OS: Linux
	I1008 22:54:11.951193  171796 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:54:11.951243  171796 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:54:11.951293  171796 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:54:11.951343  171796 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:54:11.951394  171796 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:54:11.951449  171796 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:54:11.951497  171796 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:54:11.951548  171796 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:54:11.951596  171796 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:54:12.036161  171796 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:54:12.036270  171796 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:54:12.036361  171796 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:54:12.054076  171796 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:54:12.061736  171796 out.go:252]   - Generating certificates and keys ...
	I1008 22:54:12.061833  171796 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:54:12.061898  171796 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:54:12.061975  171796 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 22:54:12.062036  171796 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 22:54:12.062106  171796 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 22:54:12.062160  171796 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 22:54:12.062224  171796 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 22:54:12.062285  171796 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 22:54:12.062359  171796 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 22:54:12.062432  171796 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 22:54:12.062470  171796 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 22:54:12.062534  171796 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:54:12.625613  171796 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:54:12.866049  171796 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:54:13.055455  171796 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:54:14.357749  171796 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:54:14.949936  171796 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:54:14.950545  171796 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:54:14.953816  171796 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:54:14.957439  171796 out.go:252]   - Booting up control plane ...
	I1008 22:54:14.957577  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:54:14.957677  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:54:14.958834  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:54:14.977498  171796 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:54:14.978141  171796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:54:14.986378  171796 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:54:14.986689  171796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:54:14.986746  171796 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:54:15.155337  171796 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:54:15.155469  171796 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:54:16.160644  171796 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.005376143s
	I1008 22:54:16.165475  171796 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:54:16.165584  171796 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1008 22:54:16.165708  171796 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:54:16.165790  171796 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:58:16.165730  171796 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	I1008 22:58:16.166225  171796 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	I1008 22:58:16.166324  171796 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	I1008 22:58:16.166332  171796 kubeadm.go:318] 
	I1008 22:58:16.166422  171796 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 22:58:16.166504  171796 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 22:58:16.166591  171796 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 22:58:16.167679  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 22:58:16.167775  171796 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 22:58:16.168263  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 22:58:16.168455  171796 kubeadm.go:318] 
	I1008 22:58:16.172618  171796 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:58:16.172842  171796 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:58:16.172947  171796 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:58:16.173497  171796 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 22:58:16.173565  171796 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 22:58:16.173617  171796 kubeadm.go:402] duration metric: took 8m14.733249742s to StartCluster
	I1008 22:58:16.173680  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 22:58:16.173740  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 22:58:16.201140  171796 cri.go:89] found id: ""
	I1008 22:58:16.201170  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.201184  171796 logs.go:284] No container was found matching "kube-apiserver"
	I1008 22:58:16.201191  171796 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 22:58:16.201248  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 22:58:16.234251  171796 cri.go:89] found id: ""
	I1008 22:58:16.234272  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.234280  171796 logs.go:284] No container was found matching "etcd"
	I1008 22:58:16.234288  171796 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 22:58:16.234349  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 22:58:16.269941  171796 cri.go:89] found id: ""
	I1008 22:58:16.269961  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.269969  171796 logs.go:284] No container was found matching "coredns"
	I1008 22:58:16.269975  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 22:58:16.270030  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 22:58:16.304017  171796 cri.go:89] found id: ""
	I1008 22:58:16.304038  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.304046  171796 logs.go:284] No container was found matching "kube-scheduler"
	I1008 22:58:16.304053  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 22:58:16.304110  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 22:58:16.340131  171796 cri.go:89] found id: ""
	I1008 22:58:16.340156  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.340164  171796 logs.go:284] No container was found matching "kube-proxy"
	I1008 22:58:16.340171  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 22:58:16.340228  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 22:58:16.381588  171796 cri.go:89] found id: ""
	I1008 22:58:16.381610  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.381618  171796 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 22:58:16.381625  171796 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 22:58:16.381708  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 22:58:16.425264  171796 cri.go:89] found id: ""
	I1008 22:58:16.425286  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.425294  171796 logs.go:284] No container was found matching "kindnet"
	I1008 22:58:16.425303  171796 logs.go:123] Gathering logs for kubelet ...
	I1008 22:58:16.425314  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 22:58:16.585708  171796 logs.go:123] Gathering logs for dmesg ...
	I1008 22:58:16.586881  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 22:58:16.610434  171796 logs.go:123] Gathering logs for describe nodes ...
	I1008 22:58:16.610516  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 22:58:17.067127  171796 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 22:58:17.056924    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.058088    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.059037    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.060775    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.061073    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 22:58:17.056924    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.058088    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.059037    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.060775    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.061073    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 22:58:17.067153  171796 logs.go:123] Gathering logs for CRI-O ...
	I1008 22:58:17.067166  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 22:58:17.156520  171796 logs.go:123] Gathering logs for container status ...
	I1008 22:58:17.156557  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 22:58:17.208767  171796 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.005376143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 22:58:17.208818  171796 out.go:285] * 
	W1008 22:58:17.208871  171796 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.005376143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:58:17.208889  171796 out.go:285] * 
	W1008 22:58:17.211057  171796 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 22:58:17.217037  171796 out.go:203] 
	W1008 22:58:17.219364  171796 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.005376143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:58:17.219405  171796 out.go:285] * 
	I1008 22:58:17.225140  171796 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-385382 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-08 22:58:17.664398565 +0000 UTC m=+4077.215667561
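A minimal manual triage sketch before the automated post-mortem below, built only from commands the kubeadm and minikube output above already suggests; CONTAINERID is a placeholder, and the binary and profile names simply mirror the failing invocation:

	# Verify the cgroup manager that --force-systemd should have written into CRI-O's drop-in.
	out/minikube-linux-arm64 -p force-systemd-flag-385382 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"

	# List any control-plane containers CRI-O started (the log gathering above found none).
	out/minikube-linux-arm64 -p force-systemd-flag-385382 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Inspect the logs of a failing container, if one exists.
	out/minikube-linux-arm64 -p force-systemd-flag-385382 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"

	# See why the kubelet never brought the static pods up.
	out/minikube-linux-arm64 -p force-systemd-flag-385382 ssh "sudo journalctl -u kubelet -n 400"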
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-385382
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-385382:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "32b5c84b915d293a8b08796c09c47485ae285f0e67c0a69f29c0667611acdcbe",
	        "Created": "2025-10-08T22:49:54.16136813Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 172195,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:49:54.223184679Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/32b5c84b915d293a8b08796c09c47485ae285f0e67c0a69f29c0667611acdcbe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/32b5c84b915d293a8b08796c09c47485ae285f0e67c0a69f29c0667611acdcbe/hostname",
	        "HostsPath": "/var/lib/docker/containers/32b5c84b915d293a8b08796c09c47485ae285f0e67c0a69f29c0667611acdcbe/hosts",
	        "LogPath": "/var/lib/docker/containers/32b5c84b915d293a8b08796c09c47485ae285f0e67c0a69f29c0667611acdcbe/32b5c84b915d293a8b08796c09c47485ae285f0e67c0a69f29c0667611acdcbe-json.log",
	        "Name": "/force-systemd-flag-385382",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-385382:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-385382",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "32b5c84b915d293a8b08796c09c47485ae285f0e67c0a69f29c0667611acdcbe",
	                "LowerDir": "/var/lib/docker/overlay2/06b793f90bd24b15fede8b4adf67ef9f59cfcf0369bc208fb6041b3dcdf76998-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/06b793f90bd24b15fede8b4adf67ef9f59cfcf0369bc208fb6041b3dcdf76998/merged",
	                "UpperDir": "/var/lib/docker/overlay2/06b793f90bd24b15fede8b4adf67ef9f59cfcf0369bc208fb6041b3dcdf76998/diff",
	                "WorkDir": "/var/lib/docker/overlay2/06b793f90bd24b15fede8b4adf67ef9f59cfcf0369bc208fb6041b3dcdf76998/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-385382",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-385382/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-385382",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-385382",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-385382",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "23cb64b6f97bcda71991038a1ed9b404c3a0db80c4cb8959cfdcf6cddf2bd320",
	            "SandboxKey": "/var/run/docker/netns/23cb64b6f97b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33041"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33042"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-385382": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:f0:ed:fb:12:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "94ec01d43e41341d7ffc7744f7ba0cdf60f855a5e72c4ff1b8362bf92c9c7634",
	                    "EndpointID": "c42a28d71a2673bceac3d29c5684cccb38a21f9378adba8853e9154d89dc2311",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-385382",
	                        "32b5c84b915d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
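The docker inspect dump above can be narrowed to the fields that matter for this failure (container state, published ports, and the node IP the control-plane checks targeted). A hedged illustration using standard docker inspect --format templates, not part of the test output:

	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' force-systemd-flag-385382
	docker inspect -f '{{json .NetworkSettings.Ports}}' force-systemd-flag-385382
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' force-systemd-flag-385382
	docker port force-systemd-flag-385382 8443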
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-385382 -n force-systemd-flag-385382
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-385382 -n force-systemd-flag-385382: exit status 6 (407.8598ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 22:58:18.076636  192126 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-385382" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
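The status error above is expected here: the failed start never registered the profile endpoint in the kubeconfig, so kubectl is left on a stale context. A minimal sketch of the remediation the warning itself suggests (moot in this run, since the cluster never came up):

	kubectl config get-contexts
	out/minikube-linux-arm64 -p force-systemd-flag-385382 update-context
	out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-385382 -n force-systemd-flag-385382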
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-385382 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ delete  │ -p cert-expiration-292528                                                                                                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ start   │ -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │                     │
	│ delete  │ -p force-systemd-env-092546                                                                                                                                                                                                                   │ force-systemd-env-092546  │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:52 UTC │
	│ start   │ -p cert-options-378019 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ cert-options-378019 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ -p cert-options-378019 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ delete  │ -p cert-options-378019                                                                                                                                                                                                                        │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │                     │
	│ stop    │ -p old-k8s-version-110407 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-110407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:55 UTC │
	│ image   │ old-k8s-version-110407 image list --format=json                                                                                                                                                                                               │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ pause   │ -p old-k8s-version-110407 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │                     │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	│ stop    │ -p no-preload-939665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:58 UTC │
	│ image   │ no-preload-939665 image list --format=json                                                                                                                                                                                                    │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ pause   │ -p no-preload-939665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	│ ssh     │ force-systemd-flag-385382 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:57:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:57:14.782613  189215 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:57:14.782899  189215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:57:14.782916  189215 out.go:374] Setting ErrFile to fd 2...
	I1008 22:57:14.782922  189215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:57:14.783293  189215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:57:14.783741  189215 out.go:368] Setting JSON to false
	I1008 22:57:14.784656  189215 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5985,"bootTime":1759958250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:57:14.784745  189215 start.go:141] virtualization:  
	I1008 22:57:14.787916  189215 out.go:179] * [no-preload-939665] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:57:14.791714  189215 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:57:14.791882  189215 notify.go:220] Checking for updates...
	I1008 22:57:14.797701  189215 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:57:14.800574  189215 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:57:14.803453  189215 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:57:14.806361  189215 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:57:14.809186  189215 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:57:14.812556  189215 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:57:14.813125  189215 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:57:14.841927  189215 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:57:14.842105  189215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:57:14.898169  189215 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:57:14.888828193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:57:14.898273  189215 docker.go:318] overlay module found
	I1008 22:57:14.901448  189215 out.go:179] * Using the docker driver based on existing profile
	I1008 22:57:14.904243  189215 start.go:305] selected driver: docker
	I1008 22:57:14.904260  189215 start.go:925] validating driver "docker" against &{Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:57:14.904383  189215 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:57:14.905115  189215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:57:14.957085  189215 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:57:14.948430473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:57:14.957449  189215 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:57:14.957477  189215 cni.go:84] Creating CNI manager for ""
	I1008 22:57:14.957535  189215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:57:14.957581  189215 start.go:349] cluster config:
	{Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:57:14.960867  189215 out.go:179] * Starting "no-preload-939665" primary control-plane node in "no-preload-939665" cluster
	I1008 22:57:14.963850  189215 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:57:14.966939  189215 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:57:14.969809  189215 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:57:14.969897  189215 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:57:14.969958  189215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/config.json ...
	I1008 22:57:14.970334  189215 cache.go:107] acquiring lock: {Name:mk344f5adac59ef32f6d69c009b0f8ec87052611 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970423  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1008 22:57:14.970437  189215 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 123.07µs
	I1008 22:57:14.970460  189215 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1008 22:57:14.970475  189215 cache.go:107] acquiring lock: {Name:mk2a1f78f7d6511aea6d634a58ed1c88718aab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970511  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1008 22:57:14.970520  189215 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 46.335µs
	I1008 22:57:14.970527  189215 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1008 22:57:14.970542  189215 cache.go:107] acquiring lock: {Name:mk7141aa7b89df55e8dad25221487d909ba46017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970574  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1008 22:57:14.970582  189215 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 40.935µs
	I1008 22:57:14.970589  189215 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1008 22:57:14.970598  189215 cache.go:107] acquiring lock: {Name:mk49b6b290192d16491277897c30c50e3badc30b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970628  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1008 22:57:14.970638  189215 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.501µs
	I1008 22:57:14.970644  189215 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1008 22:57:14.970653  189215 cache.go:107] acquiring lock: {Name:mka3f9c49147e0e292b0cfd3d6255817b177ac9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970685  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1008 22:57:14.970695  189215 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 42.691µs
	I1008 22:57:14.970701  189215 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1008 22:57:14.970713  189215 cache.go:107] acquiring lock: {Name:mk85b30d8a79adbfa59b06c1c836919be1606fc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970744  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1008 22:57:14.970753  189215 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 43.012µs
	I1008 22:57:14.970759  189215 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1008 22:57:14.970774  189215 cache.go:107] acquiring lock: {Name:mka1ae807285591bb895528e804cb6d37d5af28f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970800  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1008 22:57:14.970809  189215 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.046µs
	I1008 22:57:14.970815  189215 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1008 22:57:14.970825  189215 cache.go:107] acquiring lock: {Name:mk61bfc3bad4ca73036eaa8d93cb87fd5c241083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970863  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1008 22:57:14.970873  189215 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 50.766µs
	I1008 22:57:14.970880  189215 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1008 22:57:14.970886  189215 cache.go:87] Successfully saved all images to host disk.
	I1008 22:57:14.990397  189215 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:57:14.990422  189215 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:57:14.990442  189215 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:57:14.990471  189215 start.go:360] acquireMachinesLock for no-preload-939665: {Name:mk60e1980ef0e273f848717956362180f47a8fab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.990555  189215 start.go:364] duration metric: took 63.353µs to acquireMachinesLock for "no-preload-939665"
	I1008 22:57:14.990584  189215 start.go:96] Skipping create...Using existing machine configuration
	I1008 22:57:14.990607  189215 fix.go:54] fixHost starting: 
	I1008 22:57:14.990890  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:15.009848  189215 fix.go:112] recreateIfNeeded on no-preload-939665: state=Stopped err=<nil>
	W1008 22:57:15.009885  189215 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 22:57:15.013952  189215 out.go:252] * Restarting existing docker container for "no-preload-939665" ...
	I1008 22:57:15.014066  189215 cli_runner.go:164] Run: docker start no-preload-939665
	I1008 22:57:15.284522  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:15.311136  189215 kic.go:430] container "no-preload-939665" state is running.
	I1008 22:57:15.311522  189215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:57:15.331603  189215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/config.json ...
	I1008 22:57:15.331823  189215 machine.go:93] provisionDockerMachine start ...
	I1008 22:57:15.331882  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:15.351588  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:15.351896  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:15.351905  189215 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:57:15.352659  189215 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 22:57:18.497516  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-939665
	
	I1008 22:57:18.497540  189215 ubuntu.go:182] provisioning hostname "no-preload-939665"
	I1008 22:57:18.497652  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:18.515142  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:18.515455  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:18.515473  189215 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-939665 && echo "no-preload-939665" | sudo tee /etc/hostname
	I1008 22:57:18.671631  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-939665
	
	I1008 22:57:18.671704  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:18.689144  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:18.689488  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:18.689514  189215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-939665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-939665/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-939665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:57:18.833913  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:57:18.833983  189215 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:57:18.834024  189215 ubuntu.go:190] setting up certificates
	I1008 22:57:18.834042  189215 provision.go:84] configureAuth start
	I1008 22:57:18.834106  189215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:57:18.854660  189215 provision.go:143] copyHostCerts
	I1008 22:57:18.854730  189215 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:57:18.854749  189215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:57:18.854844  189215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:57:18.854950  189215 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:57:18.854967  189215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:57:18.855004  189215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:57:18.855062  189215 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:57:18.855073  189215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:57:18.855099  189215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:57:18.855154  189215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.no-preload-939665 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-939665]
	I1008 22:57:19.066124  189215 provision.go:177] copyRemoteCerts
	I1008 22:57:19.066188  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:57:19.066228  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.084957  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.185272  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 22:57:19.204144  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:57:19.221907  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
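	The server certificate generated above (provision.go:117) carries the SANs listed in the log; a minimal sketch of double-checking them on the node, assuming shell access to the machine:
	  sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
	  # expected to include: 127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-939665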
	I1008 22:57:19.239403  189215 provision.go:87] duration metric: took 405.337994ms to configureAuth
	I1008 22:57:19.239432  189215 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:57:19.239668  189215 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:57:19.239788  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.259287  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:19.259598  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:19.259621  189215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:57:19.574850  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:57:19.574879  189215 machine.go:96] duration metric: took 4.243046683s to provisionDockerMachine
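	The provisioning step above wrote an insecure-registry flag to /etc/sysconfig/crio.minikube and restarted CRI-O over SSH; a small sketch of verifying the drop-in by hand (profile name taken from this run):
	  minikube -p no-preload-939665 ssh -- sudo cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  minikube -p no-preload-939665 ssh -- systemctl is-active crio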
	I1008 22:57:19.574890  189215 start.go:293] postStartSetup for "no-preload-939665" (driver="docker")
	I1008 22:57:19.574901  189215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:57:19.574971  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:57:19.575015  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.593115  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.694140  189215 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:57:19.697805  189215 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:57:19.697837  189215 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:57:19.697849  189215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:57:19.697903  189215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:57:19.697993  189215 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:57:19.698106  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:57:19.706223  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:57:19.724098  189215 start.go:296] duration metric: took 149.193283ms for postStartSetup
	I1008 22:57:19.724176  189215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:57:19.724236  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.742535  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.842716  189215 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:57:19.847703  189215 fix.go:56] duration metric: took 4.857097744s for fixHost
	I1008 22:57:19.847773  189215 start.go:83] releasing machines lock for "no-preload-939665", held for 4.857203623s
	I1008 22:57:19.847881  189215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:57:19.865178  189215 ssh_runner.go:195] Run: cat /version.json
	I1008 22:57:19.865223  189215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:57:19.865233  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.865286  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.885811  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.891213  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:20.088495  189215 ssh_runner.go:195] Run: systemctl --version
	I1008 22:57:20.095529  189215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:57:20.132456  189215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:57:20.137397  189215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:57:20.137500  189215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:57:20.146025  189215 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
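	The find command above renames any *bridge* or *podman* CNI configs to *.mk_disabled so they cannot conflict with kindnet; a quick sketch of seeing what, if anything, was disabled on the node:
	  ls /etc/cni/net.d/*.mk_disabled 2>/dev/null || echo "no bridge/podman CNI configs were disabled"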
	I1008 22:57:20.146049  189215 start.go:495] detecting cgroup driver to use...
	I1008 22:57:20.146113  189215 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:57:20.146179  189215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:57:20.161810  189215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:57:20.175319  189215 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:57:20.175421  189215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:57:20.191090  189215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:57:20.204457  189215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:57:20.315736  189215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:57:20.440129  189215 docker.go:234] disabling docker service ...
	I1008 22:57:20.440216  189215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:57:20.455361  189215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:57:20.469076  189215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:57:20.586412  189215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:57:20.706047  189215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:57:20.718719  189215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:57:20.732049  189215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:57:20.732141  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.740752  189215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:57:20.740813  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.749357  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.758257  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.767201  189215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:57:20.775190  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.783656  189215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.791696  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.800386  189215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:57:20.808060  189215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:57:20.815631  189215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:57:20.925930  189215 ssh_runner.go:195] Run: sudo systemctl restart crio
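	The sed edits above set the pause image, switch the cgroup manager to cgroupfs, pin conmon_cgroup to "pod" and open unprivileged ports via default_sysctls; a sketch of confirming the resulting values in the CRI-O drop-in (expected values copied from the commands in the log):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",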
	I1008 22:57:21.067006  189215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:57:21.067071  189215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:57:21.070804  189215 start.go:563] Will wait 60s for crictl version
	I1008 22:57:21.070866  189215 ssh_runner.go:195] Run: which crictl
	I1008 22:57:21.074187  189215 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:57:21.098882  189215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
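	crictl reads /etc/crictl.yaml (written a few steps above), so the same runtime checks can be reproduced by hand once CRI-O is back up; a minimal sketch:
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl info    # runtime status and effective config as JSON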
	I1008 22:57:21.099037  189215 ssh_runner.go:195] Run: crio --version
	I1008 22:57:21.129152  189215 ssh_runner.go:195] Run: crio --version
	I1008 22:57:21.159678  189215 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:57:21.162526  189215 cli_runner.go:164] Run: docker network inspect no-preload-939665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:57:21.182847  189215 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:57:21.186792  189215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:57:21.196569  189215 kubeadm.go:883] updating cluster {Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:57:21.196696  189215 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:57:21.196743  189215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:57:21.234573  189215 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:57:21.234598  189215 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:57:21.234606  189215 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1008 22:57:21.234750  189215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-939665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
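	The [Service] override above clears kubelet's ExecStart and replaces it with the minikube-specific flags; once the drop-in has been copied into place (a few lines below), a sketch of inspecting the merged unit on the node:
	  sudo systemctl cat kubelet            # base unit plus the 10-kubeadm.conf drop-in
	  systemctl show kubelet -p ExecStart --no-pager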
	I1008 22:57:21.234830  189215 ssh_runner.go:195] Run: crio config
	I1008 22:57:21.292904  189215 cni.go:84] Creating CNI manager for ""
	I1008 22:57:21.292932  189215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:57:21.292950  189215 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:57:21.292972  189215 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-939665 NodeName:no-preload-939665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:57:21.293101  189215 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-939665"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:57:21.293173  189215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:57:21.301074  189215 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:57:21.301163  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:57:21.308677  189215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 22:57:21.321204  189215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:57:21.333547  189215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
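	The three scp steps above stage the kubelet drop-in, the kubelet unit and the rendered kubeadm config on the node; a sketch of comparing that config against upstream defaults for the same version (binary path taken from the log; plenty of differences are expected):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config print init-defaults \
	    --component-configs KubeletConfiguration,KubeProxyConfiguration > /tmp/kubeadm-defaults.yaml
	  sudo diff /tmp/kubeadm-defaults.yaml /var/tmp/minikube/kubeadm.yaml.new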
	I1008 22:57:21.346162  189215 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:57:21.350364  189215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:57:21.360170  189215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:57:21.467099  189215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:57:21.481987  189215 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665 for IP: 192.168.85.2
	I1008 22:57:21.482060  189215 certs.go:195] generating shared ca certs ...
	I1008 22:57:21.482092  189215 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:21.482258  189215 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:57:21.482339  189215 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:57:21.482373  189215 certs.go:257] generating profile certs ...
	I1008 22:57:21.482513  189215 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.key
	I1008 22:57:21.482622  189215 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key.108ea954
	I1008 22:57:21.482693  189215 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key
	I1008 22:57:21.482836  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:57:21.482893  189215 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:57:21.482922  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:57:21.482982  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:57:21.483035  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:57:21.483093  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:57:21.483163  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:57:21.483813  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:57:21.502778  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:57:21.520733  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:57:21.537842  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:57:21.559178  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 22:57:21.579614  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:57:21.600183  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:57:21.622833  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 22:57:21.643796  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:57:21.664903  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:57:21.687300  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:57:21.707757  189215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:57:21.721225  189215 ssh_runner.go:195] Run: openssl version
	I1008 22:57:21.727932  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:57:21.736231  189215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:57:21.740177  189215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:57:21.740254  189215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:57:21.787097  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:57:21.794806  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:57:21.802792  189215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:57:21.809303  189215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:57:21.809402  189215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:57:21.851934  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:57:21.860228  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:57:21.868365  189215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:57:21.872140  189215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:57:21.872222  189215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:57:21.913723  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:57:21.921382  189215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:57:21.925115  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 22:57:21.966240  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 22:57:22.008960  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 22:57:22.050751  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 22:57:22.105518  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 22:57:22.176820  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 22:57:22.232949  189215 kubeadm.go:400] StartCluster: {Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:57:22.233035  189215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:57:22.233090  189215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:57:22.302288  189215 cri.go:89] found id: "22fc15165b261a32940f2dedd3cd49b69d20e5e7e6bd128a867f2fd9e14ac7b3"
	I1008 22:57:22.302311  189215 cri.go:89] found id: "f8d8050a525b66b1f6059b9bef9774b0a018d7f0b512729419df31644ff85c2d"
	I1008 22:57:22.302317  189215 cri.go:89] found id: "e70ea0acf987029e54c7b861915d0152d9b02ade1e0875e36f54a30ca0b4114e"
	I1008 22:57:22.302331  189215 cri.go:89] found id: "fab90393033f57458857473a4b92f90f061b427583bfdde329136620a71abcee"
	I1008 22:57:22.302335  189215 cri.go:89] found id: ""
	I1008 22:57:22.302398  189215 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 22:57:22.323705  189215 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:57:22Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:57:22.323795  189215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:57:22.336052  189215 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 22:57:22.336070  189215 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 22:57:22.336119  189215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 22:57:22.351401  189215 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 22:57:22.351898  189215 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-939665" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:57:22.352007  189215 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-939665" cluster setting kubeconfig missing "no-preload-939665" context setting]
	I1008 22:57:22.352298  189215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:22.353574  189215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 22:57:22.366548  189215 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 22:57:22.366580  189215 kubeadm.go:601] duration metric: took 30.503126ms to restartPrimaryControlPlane
	I1008 22:57:22.366591  189215 kubeadm.go:402] duration metric: took 133.650455ms to StartCluster
	I1008 22:57:22.366606  189215 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:22.366672  189215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:57:22.367360  189215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:22.367593  189215 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:57:22.367913  189215 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:57:22.367964  189215 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:57:22.368030  189215 addons.go:69] Setting storage-provisioner=true in profile "no-preload-939665"
	I1008 22:57:22.368057  189215 addons.go:238] Setting addon storage-provisioner=true in "no-preload-939665"
	W1008 22:57:22.368070  189215 addons.go:247] addon storage-provisioner should already be in state true
	I1008 22:57:22.368095  189215 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:57:22.368651  189215 addons.go:69] Setting dashboard=true in profile "no-preload-939665"
	I1008 22:57:22.368675  189215 addons.go:238] Setting addon dashboard=true in "no-preload-939665"
	W1008 22:57:22.368687  189215 addons.go:247] addon dashboard should already be in state true
	I1008 22:57:22.368707  189215 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:57:22.369171  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.369483  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.369909  189215 addons.go:69] Setting default-storageclass=true in profile "no-preload-939665"
	I1008 22:57:22.369931  189215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-939665"
	I1008 22:57:22.370206  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.373239  189215 out.go:179] * Verifying Kubernetes components...
	I1008 22:57:22.376522  189215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:57:22.435767  189215 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 22:57:22.435850  189215 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:57:22.437591  189215 addons.go:238] Setting addon default-storageclass=true in "no-preload-939665"
	W1008 22:57:22.437613  189215 addons.go:247] addon default-storageclass should already be in state true
	I1008 22:57:22.437660  189215 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:57:22.438223  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.438823  189215 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:57:22.438847  189215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:57:22.438905  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:22.442056  189215 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 22:57:22.453816  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 22:57:22.453845  189215 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 22:57:22.453928  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:22.489201  189215 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:57:22.489223  189215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:57:22.489292  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:22.493728  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:22.506546  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:22.526057  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:22.731621  189215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:57:22.754398  189215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:57:22.802907  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 22:57:22.802933  189215 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 22:57:22.820730  189215 node_ready.go:35] waiting up to 6m0s for node "no-preload-939665" to be "Ready" ...
	I1008 22:57:22.834337  189215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:57:22.867847  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 22:57:22.867873  189215 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 22:57:22.958881  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 22:57:22.959044  189215 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 22:57:23.009828  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 22:57:23.009895  189215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 22:57:23.051687  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 22:57:23.051760  189215 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 22:57:23.078571  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 22:57:23.078656  189215 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 22:57:23.095149  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 22:57:23.095223  189215 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 22:57:23.110652  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 22:57:23.110724  189215 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 22:57:23.146912  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:57:23.146978  189215 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 22:57:23.172565  189215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:57:26.428523  189215 node_ready.go:49] node "no-preload-939665" is "Ready"
	I1008 22:57:26.428555  189215 node_ready.go:38] duration metric: took 3.607792114s for node "no-preload-939665" to be "Ready" ...
	I1008 22:57:26.428570  189215 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:57:26.428661  189215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:57:26.609530  189215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.855058214s)
	I1008 22:57:27.790649  189215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.956230897s)
	I1008 22:57:27.790861  189215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.618214083s)
	I1008 22:57:27.791096  189215 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.362417244s)
	I1008 22:57:27.791160  189215 api_server.go:72] duration metric: took 5.423535251s to wait for apiserver process to appear ...
	I1008 22:57:27.791181  189215 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:57:27.791226  189215 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:57:27.794370  189215 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-939665 addons enable metrics-server
	
	I1008 22:57:27.797309  189215 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1008 22:57:27.800172  189215 addons.go:514] duration metric: took 5.432173441s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1008 22:57:27.808726  189215 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 22:57:27.808753  189215 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 22:57:28.291323  189215 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:57:28.299370  189215 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 22:57:28.300462  189215 api_server.go:141] control plane version: v1.34.1
	I1008 22:57:28.300485  189215 api_server.go:131] duration metric: took 509.284275ms to wait for apiserver health ...
	I1008 22:57:28.300495  189215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:57:28.303496  189215 system_pods.go:59] 8 kube-system pods found
	I1008 22:57:28.303535  189215 system_pods.go:61] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:57:28.303567  189215 system_pods.go:61] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:57:28.303578  189215 system_pods.go:61] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:57:28.303587  189215 system_pods.go:61] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:57:28.303604  189215 system_pods.go:61] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:57:28.303610  189215 system_pods.go:61] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:57:28.303617  189215 system_pods.go:61] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:57:28.303636  189215 system_pods.go:61] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Running
	I1008 22:57:28.303647  189215 system_pods.go:74] duration metric: took 3.14283ms to wait for pod list to return data ...
	I1008 22:57:28.303663  189215 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:57:28.306192  189215 default_sa.go:45] found service account: "default"
	I1008 22:57:28.306220  189215 default_sa.go:55] duration metric: took 2.550603ms for default service account to be created ...
	I1008 22:57:28.306230  189215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:57:28.308825  189215 system_pods.go:86] 8 kube-system pods found
	I1008 22:57:28.308858  189215 system_pods.go:89] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:57:28.308868  189215 system_pods.go:89] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:57:28.308874  189215 system_pods.go:89] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:57:28.308881  189215 system_pods.go:89] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:57:28.308888  189215 system_pods.go:89] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:57:28.308892  189215 system_pods.go:89] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:57:28.308899  189215 system_pods.go:89] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:57:28.308909  189215 system_pods.go:89] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Running
	I1008 22:57:28.308915  189215 system_pods.go:126] duration metric: took 2.680204ms to wait for k8s-apps to be running ...
	I1008 22:57:28.308929  189215 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:57:28.308984  189215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:57:28.322889  189215 system_svc.go:56] duration metric: took 13.951449ms WaitForService to wait for kubelet
	I1008 22:57:28.322918  189215 kubeadm.go:586] duration metric: took 5.955290813s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:57:28.322958  189215 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:57:28.328387  189215 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:57:28.328425  189215 node_conditions.go:123] node cpu capacity is 2
	I1008 22:57:28.328438  189215 node_conditions.go:105] duration metric: took 5.467412ms to run NodePressure ...
	I1008 22:57:28.328451  189215 start.go:241] waiting for startup goroutines ...
	I1008 22:57:28.328458  189215 start.go:246] waiting for cluster config update ...
	I1008 22:57:28.328473  189215 start.go:255] writing updated cluster config ...
	I1008 22:57:28.328760  189215 ssh_runner.go:195] Run: rm -f paused
	I1008 22:57:28.332532  189215 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:57:28.336688  189215 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wj8wf" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 22:57:30.350864  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:32.843064  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:34.844285  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:36.845168  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:39.344059  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:41.842645  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:43.843301  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:46.342737  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:48.842335  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:50.842730  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:52.842806  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:55.342860  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:57.844337  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:58:00.353944  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	I1008 22:58:01.343135  189215 pod_ready.go:94] pod "coredns-66bc5c9577-wj8wf" is "Ready"
	I1008 22:58:01.343163  189215 pod_ready.go:86] duration metric: took 33.006442095s for pod "coredns-66bc5c9577-wj8wf" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.346159  189215 pod_ready.go:83] waiting for pod "etcd-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.350999  189215 pod_ready.go:94] pod "etcd-no-preload-939665" is "Ready"
	I1008 22:58:01.351028  189215 pod_ready.go:86] duration metric: took 4.841796ms for pod "etcd-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.353471  189215 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.358536  189215 pod_ready.go:94] pod "kube-apiserver-no-preload-939665" is "Ready"
	I1008 22:58:01.358567  189215 pod_ready.go:86] duration metric: took 5.065093ms for pod "kube-apiserver-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.361323  189215 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.541059  189215 pod_ready.go:94] pod "kube-controller-manager-no-preload-939665" is "Ready"
	I1008 22:58:01.541090  189215 pod_ready.go:86] duration metric: took 179.740333ms for pod "kube-controller-manager-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.741235  189215 pod_ready.go:83] waiting for pod "kube-proxy-77lvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.141610  189215 pod_ready.go:94] pod "kube-proxy-77lvp" is "Ready"
	I1008 22:58:02.141660  189215 pod_ready.go:86] duration metric: took 400.391388ms for pod "kube-proxy-77lvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.340814  189215 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.741219  189215 pod_ready.go:94] pod "kube-scheduler-no-preload-939665" is "Ready"
	I1008 22:58:02.741265  189215 pod_ready.go:86] duration metric: took 400.423027ms for pod "kube-scheduler-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.741278  189215 pod_ready.go:40] duration metric: took 34.408667436s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:58:02.798065  189215 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:58:02.802090  189215 out.go:179] * Done! kubectl is now configured to use "no-preload-939665" cluster and "default" namespace by default
	I1008 22:58:16.165730  171796 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	I1008 22:58:16.166225  171796 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	I1008 22:58:16.166324  171796 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	I1008 22:58:16.166332  171796 kubeadm.go:318] 
	I1008 22:58:16.166422  171796 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 22:58:16.166504  171796 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 22:58:16.166591  171796 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 22:58:16.167679  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 22:58:16.167775  171796 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 22:58:16.168263  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 22:58:16.168455  171796 kubeadm.go:318] 
	I1008 22:58:16.172618  171796 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:58:16.172842  171796 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:58:16.172947  171796 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:58:16.173497  171796 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 22:58:16.173565  171796 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 22:58:16.173617  171796 kubeadm.go:402] duration metric: took 8m14.733249742s to StartCluster
	I1008 22:58:16.173680  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 22:58:16.173740  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 22:58:16.201140  171796 cri.go:89] found id: ""
	I1008 22:58:16.201170  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.201184  171796 logs.go:284] No container was found matching "kube-apiserver"
	I1008 22:58:16.201191  171796 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 22:58:16.201248  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 22:58:16.234251  171796 cri.go:89] found id: ""
	I1008 22:58:16.234272  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.234280  171796 logs.go:284] No container was found matching "etcd"
	I1008 22:58:16.234288  171796 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 22:58:16.234349  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 22:58:16.269941  171796 cri.go:89] found id: ""
	I1008 22:58:16.269961  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.269969  171796 logs.go:284] No container was found matching "coredns"
	I1008 22:58:16.269975  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 22:58:16.270030  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 22:58:16.304017  171796 cri.go:89] found id: ""
	I1008 22:58:16.304038  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.304046  171796 logs.go:284] No container was found matching "kube-scheduler"
	I1008 22:58:16.304053  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 22:58:16.304110  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 22:58:16.340131  171796 cri.go:89] found id: ""
	I1008 22:58:16.340156  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.340164  171796 logs.go:284] No container was found matching "kube-proxy"
	I1008 22:58:16.340171  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 22:58:16.340228  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 22:58:16.381588  171796 cri.go:89] found id: ""
	I1008 22:58:16.381610  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.381618  171796 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 22:58:16.381625  171796 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 22:58:16.381708  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 22:58:16.425264  171796 cri.go:89] found id: ""
	I1008 22:58:16.425286  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.425294  171796 logs.go:284] No container was found matching "kindnet"
	I1008 22:58:16.425303  171796 logs.go:123] Gathering logs for kubelet ...
	I1008 22:58:16.425314  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 22:58:16.585708  171796 logs.go:123] Gathering logs for dmesg ...
	I1008 22:58:16.586881  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 22:58:16.610434  171796 logs.go:123] Gathering logs for describe nodes ...
	I1008 22:58:16.610516  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 22:58:17.067127  171796 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 22:58:17.056924    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.058088    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.059037    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.060775    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.061073    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 22:58:17.056924    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.058088    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.059037    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.060775    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.061073    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 22:58:17.067153  171796 logs.go:123] Gathering logs for CRI-O ...
	I1008 22:58:17.067166  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 22:58:17.156520  171796 logs.go:123] Gathering logs for container status ...
	I1008 22:58:17.156557  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 22:58:17.208767  171796 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.005376143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 22:58:17.208818  171796 out.go:285] * 
	W1008 22:58:17.208871  171796 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.005376143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:58:17.208889  171796 out.go:285] * 
	W1008 22:58:17.211057  171796 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 22:58:17.217037  171796 out.go:203] 
	W1008 22:58:17.219364  171796 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.005376143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:58:17.219405  171796 out.go:285] * 
	I1008 22:58:17.225140  171796 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 22:58:09 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:09.707047664Z" level=info msg="createCtr: removing container 8a7a20423a3c097093095d029c54d882577e044977ebb0ed4c9511fe2205a849" id=8dfd8730-6f9c-4799-bbce-708dadcf7f65 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:09 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:09.707083791Z" level=info msg="createCtr: deleting container 8a7a20423a3c097093095d029c54d882577e044977ebb0ed4c9511fe2205a849 from storage" id=8dfd8730-6f9c-4799-bbce-708dadcf7f65 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:09 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:09.709878507Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-flag-385382_kube-system_161554915c528da1cf1a2ae68f28169f_0" id=8dfd8730-6f9c-4799-bbce-708dadcf7f65 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.666245242Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=68ba71c7-c4c2-44c4-a5a8-f07e3256ef78 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.670380618Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=ed1add21-ac0a-421f-a524-a6d5bb5c6f51 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.67156551Z" level=info msg="Creating container: kube-system/kube-controller-manager-force-systemd-flag-385382/kube-controller-manager" id=16706521-be1a-49ae-a09c-005157ed000f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.671944534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.67803016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.679060957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.691919684Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=16706521-be1a-49ae-a09c-005157ed000f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.693091677Z" level=info msg="createCtr: deleting container ID b1f5693bfbd78565190ad51a770f5a73671e7c694aee888ac705597b3152e2fd from idIndex" id=16706521-be1a-49ae-a09c-005157ed000f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.693134878Z" level=info msg="createCtr: removing container b1f5693bfbd78565190ad51a770f5a73671e7c694aee888ac705597b3152e2fd" id=16706521-be1a-49ae-a09c-005157ed000f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.693173902Z" level=info msg="createCtr: deleting container b1f5693bfbd78565190ad51a770f5a73671e7c694aee888ac705597b3152e2fd from storage" id=16706521-be1a-49ae-a09c-005157ed000f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:12 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:12.695916178Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-flag-385382_kube-system_b529e201c7873b78350eb3028c0ec237_0" id=16706521-be1a-49ae-a09c-005157ed000f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.664929627Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=3f31ad15-5a50-4f58-b996-4bab50ed7f5a name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.665755343Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=50ba0503-3d9c-454b-9228-44e7e8aefe49 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.666560785Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-flag-385382/kube-apiserver" id=0bae6674-81d7-44ad-a471-453d784b610a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.666795134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.671998099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.672465198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.683652472Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0bae6674-81d7-44ad-a471-453d784b610a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.684818992Z" level=info msg="createCtr: deleting container ID 7ce2598def8524bc7d264005cdf6d27e0055dc6ce318ce96efddf2003d606c4e from idIndex" id=0bae6674-81d7-44ad-a471-453d784b610a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.684863185Z" level=info msg="createCtr: removing container 7ce2598def8524bc7d264005cdf6d27e0055dc6ce318ce96efddf2003d606c4e" id=0bae6674-81d7-44ad-a471-453d784b610a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.684901216Z" level=info msg="createCtr: deleting container 7ce2598def8524bc7d264005cdf6d27e0055dc6ce318ce96efddf2003d606c4e from storage" id=0bae6674-81d7-44ad-a471-453d784b610a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:58:15 force-systemd-flag-385382 crio[836]: time="2025-10-08T22:58:15.687580714Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-flag-385382_kube-system_1c336135d2179200736f713f5e56b030_0" id=0bae6674-81d7-44ad-a471-453d784b610a name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 22:58:18.970118    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:18.970554    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:18.971742    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:18.972448    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:18.973748    2484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 22:58:19 up  1:40,  0 user,  load average: 1.51, 1.47, 1.66
	Linux force-systemd-flag-385382 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 22:58:09 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:09.711147    1778 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 22:58:09 force-systemd-flag-385382 kubelet[1778]:         container etcd start failed in pod etcd-force-systemd-flag-385382_kube-system(161554915c528da1cf1a2ae68f28169f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 22:58:09 force-systemd-flag-385382 kubelet[1778]:  > logger="UnhandledError"
	Oct 08 22:58:09 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:09.711178    1778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-flag-385382" podUID="161554915c528da1cf1a2ae68f28169f"
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:12.313159    1778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-flag-385382?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]: I1008 22:58:12.500290    1778 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-flag-385382"
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:12.500687    1778 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="force-systemd-flag-385382"
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:12.665613    1778 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-385382\" not found" node="force-systemd-flag-385382"
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:12.696281    1778 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]:  > podSandboxID="35bcc3da70dcee1c29d70964b32e5031388c6de0dff12eda26cf4b706cd573f5"
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:12.696387    1778 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-flag-385382_kube-system(b529e201c7873b78350eb3028c0ec237): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]:  > logger="UnhandledError"
	Oct 08 22:58:12 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:12.696421    1778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-flag-385382" podUID="b529e201c7873b78350eb3028c0ec237"
	Oct 08 22:58:13 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:13.210355    1778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-385382.186ca5fbe5cd5ca0  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-385382,UID:force-systemd-flag-385382,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-385382 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-385382,},FirstTimestamp:2025-10-08 22:54:15.705836704 +0000 UTC m=+0.555617049,LastTimestamp:2025-10-08 22:54:15.705836704 +0000 UTC m=+0.555617049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-flag-385382,}"
	Oct 08 22:58:15 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:15.664423    1778 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-385382\" not found" node="force-systemd-flag-385382"
	Oct 08 22:58:15 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:15.687862    1778 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 22:58:15 force-systemd-flag-385382 kubelet[1778]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 22:58:15 force-systemd-flag-385382 kubelet[1778]:  > podSandboxID="5916c26c5f680f52c23301b2ac24fb25a3c5f91d4aee05b33e23f34f3c5289ad"
	Oct 08 22:58:15 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:15.688062    1778 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 22:58:15 force-systemd-flag-385382 kubelet[1778]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-flag-385382_kube-system(1c336135d2179200736f713f5e56b030): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 22:58:15 force-systemd-flag-385382 kubelet[1778]:  > logger="UnhandledError"
	Oct 08 22:58:15 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:15.688221    1778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-flag-385382" podUID="1c336135d2179200736f713f5e56b030"
	Oct 08 22:58:15 force-systemd-flag-385382 kubelet[1778]: E1008 22:58:15.739075    1778 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-385382\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-385382 -n force-systemd-flag-385382
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-385382 -n force-systemd-flag-385382: exit status 6 (448.86275ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1008 22:58:19.558558  192507 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-385382" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-385382" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-385382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-385382
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-385382: (2.327765922s)
--- FAIL: TestForceSystemdFlag (513.24s)
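
For reference, the crictl troubleshooting steps suggested in the kubeadm output above would apply to a live reproduction of this failure (the force-systemd-flag-385382 profile was already deleted by the cleanup step). A minimal sketch, reusing the crictl invocations quoted verbatim in the log; the minikube ssh step and the sudo prefix are assumptions about how one would reach the node with the docker driver, and CONTAINERID is a placeholder for an ID returned by the listing command:

	# open a shell on the node for this profile (docker driver)
	out/minikube-linux-arm64 ssh -p force-systemd-flag-385382
	# list all Kubernetes containers, including ones that repeatedly failed to create
	# with "cannot open sd-bus: No such file or directory"
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container found above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID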

TestForceSystemdEnv (522s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-092546 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-092546 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m38.462264883s)

-- stdout --
	* [force-systemd-env-092546] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-092546" primary control-plane node in "force-systemd-env-092546" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1008 22:44:03.551016  155694 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:44:03.551202  155694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:44:03.551214  155694 out.go:374] Setting ErrFile to fd 2...
	I1008 22:44:03.551219  155694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:44:03.551493  155694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:44:03.551904  155694 out.go:368] Setting JSON to false
	I1008 22:44:03.552765  155694 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5194,"bootTime":1759958250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:44:03.552832  155694 start.go:141] virtualization:  
	I1008 22:44:03.558761  155694 out.go:179] * [force-systemd-env-092546] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:44:03.562325  155694 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:44:03.562339  155694 notify.go:220] Checking for updates...
	I1008 22:44:03.569396  155694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:44:03.572491  155694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:44:03.575621  155694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:44:03.578645  155694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:44:03.581584  155694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1008 22:44:03.585236  155694 config.go:182] Loaded profile config "running-upgrade-450799": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1008 22:44:03.585331  155694 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:44:03.617237  155694 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:44:03.617359  155694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:44:03.692359  155694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:44:03.682718022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:44:03.692470  155694 docker.go:318] overlay module found
	I1008 22:44:03.695816  155694 out.go:179] * Using the docker driver based on user configuration
	I1008 22:44:03.698806  155694 start.go:305] selected driver: docker
	I1008 22:44:03.698827  155694 start.go:925] validating driver "docker" against <nil>
	I1008 22:44:03.698841  155694 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:44:03.699541  155694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:44:03.785405  155694 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:44:03.771216956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:44:03.785567  155694 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 22:44:03.785832  155694 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 22:44:03.788856  155694 out.go:179] * Using Docker driver with root privileges
	I1008 22:44:03.791861  155694 cni.go:84] Creating CNI manager for ""
	I1008 22:44:03.791942  155694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:44:03.791958  155694 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 22:44:03.792068  155694 start.go:349] cluster config:
	{Name:force-systemd-env-092546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-092546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:44:03.795366  155694 out.go:179] * Starting "force-systemd-env-092546" primary control-plane node in "force-systemd-env-092546" cluster
	I1008 22:44:03.798168  155694 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:44:03.801034  155694 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:44:03.803807  155694 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:44:03.803884  155694 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 22:44:03.803900  155694 cache.go:58] Caching tarball of preloaded images
	I1008 22:44:03.803987  155694 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:44:03.804002  155694 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 22:44:03.804106  155694 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/config.json ...
	I1008 22:44:03.804131  155694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/config.json: {Name:mkb8460ad87d4ffe46bfba2a7b92ab9387725e68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:44:03.804293  155694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:44:03.830406  155694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:44:03.830431  155694 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:44:03.830452  155694 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:44:03.830475  155694 start.go:360] acquireMachinesLock for force-systemd-env-092546: {Name:mk76df84d54942e84358bdf0649f904f1d5cdb5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:44:03.830576  155694 start.go:364] duration metric: took 80.888µs to acquireMachinesLock for "force-systemd-env-092546"
	I1008 22:44:03.830607  155694 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-092546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-092546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:44:03.830680  155694 start.go:125] createHost starting for "" (driver="docker")
	I1008 22:44:03.834062  155694 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:44:03.834297  155694 start.go:159] libmachine.API.Create for "force-systemd-env-092546" (driver="docker")
	I1008 22:44:03.834344  155694 client.go:168] LocalClient.Create starting
	I1008 22:44:03.834431  155694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:44:03.834469  155694 main.go:141] libmachine: Decoding PEM data...
	I1008 22:44:03.834491  155694 main.go:141] libmachine: Parsing certificate...
	I1008 22:44:03.834545  155694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:44:03.834566  155694 main.go:141] libmachine: Decoding PEM data...
	I1008 22:44:03.834580  155694 main.go:141] libmachine: Parsing certificate...
	I1008 22:44:03.834938  155694 cli_runner.go:164] Run: docker network inspect force-systemd-env-092546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:44:03.855216  155694 cli_runner.go:211] docker network inspect force-systemd-env-092546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:44:03.855314  155694 network_create.go:284] running [docker network inspect force-systemd-env-092546] to gather additional debugging logs...
	I1008 22:44:03.855336  155694 cli_runner.go:164] Run: docker network inspect force-systemd-env-092546
	W1008 22:44:03.873725  155694 cli_runner.go:211] docker network inspect force-systemd-env-092546 returned with exit code 1
	I1008 22:44:03.873758  155694 network_create.go:287] error running [docker network inspect force-systemd-env-092546]: docker network inspect force-systemd-env-092546: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-092546 not found
	I1008 22:44:03.873772  155694 network_create.go:289] output of [docker network inspect force-systemd-env-092546]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-092546 not found
	
	** /stderr **
	I1008 22:44:03.873877  155694 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:44:03.891728  155694 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:44:03.892025  155694 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:44:03.892291  155694 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:44:03.892581  155694 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e8e1360946fd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:66:bf:64:93:e2} reservation:<nil>}
	I1008 22:44:03.892993  155694 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a19c00}
	I1008 22:44:03.893016  155694 network_create.go:124] attempt to create docker network force-systemd-env-092546 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1008 22:44:03.893077  155694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-092546 force-systemd-env-092546
	I1008 22:44:03.962815  155694 network_create.go:108] docker network force-systemd-env-092546 192.168.85.0/24 created
	I1008 22:44:03.962851  155694 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-092546" container
	I1008 22:44:03.962937  155694 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:44:03.980662  155694 cli_runner.go:164] Run: docker volume create force-systemd-env-092546 --label name.minikube.sigs.k8s.io=force-systemd-env-092546 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:44:03.998078  155694 oci.go:103] Successfully created a docker volume force-systemd-env-092546
	I1008 22:44:03.998179  155694 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-092546-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-092546 --entrypoint /usr/bin/test -v force-systemd-env-092546:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:44:04.625932  155694 oci.go:107] Successfully prepared a docker volume force-systemd-env-092546
	I1008 22:44:04.625984  155694 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:44:04.626014  155694 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:44:04.626084  155694 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-092546:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 22:44:09.411789  155694 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-092546:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.785651621s)
	I1008 22:44:09.411829  155694 kic.go:203] duration metric: took 4.785811492s to extract preloaded images to volume ...
	W1008 22:44:09.412036  155694 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:44:09.412199  155694 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:44:09.498594  155694 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-092546 --name force-systemd-env-092546 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-092546 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-092546 --network force-systemd-env-092546 --ip 192.168.85.2 --volume force-systemd-env-092546:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:44:09.876412  155694 cli_runner.go:164] Run: docker container inspect force-systemd-env-092546 --format={{.State.Running}}
	I1008 22:44:09.906261  155694 cli_runner.go:164] Run: docker container inspect force-systemd-env-092546 --format={{.State.Status}}
	I1008 22:44:09.943713  155694 cli_runner.go:164] Run: docker exec force-systemd-env-092546 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:44:10.012486  155694 oci.go:144] the created container "force-systemd-env-092546" has a running status.
	I1008 22:44:10.012524  155694 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-env-092546/id_rsa...
	I1008 22:44:10.398170  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-env-092546/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 22:44:10.398263  155694 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-env-092546/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:44:10.429709  155694 cli_runner.go:164] Run: docker container inspect force-systemd-env-092546 --format={{.State.Status}}
	I1008 22:44:10.464054  155694 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:44:10.464074  155694 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-092546 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:44:10.555496  155694 cli_runner.go:164] Run: docker container inspect force-systemd-env-092546 --format={{.State.Status}}
	I1008 22:44:10.587231  155694 machine.go:93] provisionDockerMachine start ...
	I1008 22:44:10.587340  155694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-092546
	I1008 22:44:10.622491  155694 main.go:141] libmachine: Using SSH client type: native
	I1008 22:44:10.622824  155694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33016 <nil> <nil>}
	I1008 22:44:10.622833  155694 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:44:10.623603  155694 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36740->127.0.0.1:33016: read: connection reset by peer
	I1008 22:44:13.777987  155694 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-092546
	
	I1008 22:44:13.778014  155694 ubuntu.go:182] provisioning hostname "force-systemd-env-092546"
	I1008 22:44:13.778107  155694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-092546
	I1008 22:44:13.805461  155694 main.go:141] libmachine: Using SSH client type: native
	I1008 22:44:13.805788  155694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33016 <nil> <nil>}
	I1008 22:44:13.805801  155694 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-092546 && echo "force-systemd-env-092546" | sudo tee /etc/hostname
	I1008 22:44:13.967738  155694 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-092546
	
	I1008 22:44:13.967893  155694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-092546
	I1008 22:44:13.986919  155694 main.go:141] libmachine: Using SSH client type: native
	I1008 22:44:13.987258  155694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33016 <nil> <nil>}
	I1008 22:44:13.987283  155694 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-092546' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-092546/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-092546' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:44:14.145961  155694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:44:14.146032  155694 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:44:14.146067  155694 ubuntu.go:190] setting up certificates
	I1008 22:44:14.146110  155694 provision.go:84] configureAuth start
	I1008 22:44:14.146193  155694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-092546
	I1008 22:44:14.163274  155694 provision.go:143] copyHostCerts
	I1008 22:44:14.163315  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:44:14.163351  155694 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:44:14.163359  155694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:44:14.163437  155694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:44:14.163526  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:44:14.163542  155694 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:44:14.163547  155694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:44:14.163577  155694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:44:14.163623  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:44:14.163638  155694 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:44:14.163641  155694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:44:14.163664  155694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:44:14.163715  155694 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-092546 san=[127.0.0.1 192.168.85.2 force-systemd-env-092546 localhost minikube]
	I1008 22:44:15.649283  155694 provision.go:177] copyRemoteCerts
	I1008 22:44:15.649428  155694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:44:15.649494  155694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-092546
	I1008 22:44:15.667103  155694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33016 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-env-092546/id_rsa Username:docker}
	I1008 22:44:15.789606  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 22:44:15.789680  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:44:15.823130  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 22:44:15.823195  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1008 22:44:15.852020  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 22:44:15.852090  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 22:44:15.877029  155694 provision.go:87] duration metric: took 1.730875435s to configureAuth
	I1008 22:44:15.877104  155694 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:44:15.877338  155694 config.go:182] Loaded profile config "force-systemd-env-092546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:44:15.877493  155694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-092546
	I1008 22:44:15.903182  155694 main.go:141] libmachine: Using SSH client type: native
	I1008 22:44:15.903479  155694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33016 <nil> <nil>}
	I1008 22:44:15.903494  155694 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:44:16.358372  155694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:44:16.358398  155694 machine.go:96] duration metric: took 5.771147873s to provisionDockerMachine
	I1008 22:44:16.358415  155694 client.go:171] duration metric: took 12.524060377s to LocalClient.Create
	I1008 22:44:16.358428  155694 start.go:167] duration metric: took 12.524132165s to libmachine.API.Create "force-systemd-env-092546"
	I1008 22:44:16.358445  155694 start.go:293] postStartSetup for "force-systemd-env-092546" (driver="docker")
	I1008 22:44:16.358481  155694 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:44:16.358573  155694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:44:16.358653  155694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-092546
	I1008 22:44:16.385820  155694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33016 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-env-092546/id_rsa Username:docker}
	I1008 22:44:16.504150  155694 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:44:16.508089  155694 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:44:16.508120  155694 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:44:16.508132  155694 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:44:16.508194  155694 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:44:16.508276  155694 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:44:16.508287  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> /etc/ssl/certs/42862.pem
	I1008 22:44:16.508393  155694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:44:16.523723  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:44:16.560002  155694 start.go:296] duration metric: took 201.518313ms for postStartSetup
	I1008 22:44:16.560376  155694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-092546
	I1008 22:44:16.623295  155694 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/config.json ...
	I1008 22:44:16.623592  155694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:44:16.623637  155694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-092546
	I1008 22:44:16.648800  155694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33016 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-env-092546/id_rsa Username:docker}
	I1008 22:44:16.778899  155694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:44:16.789784  155694 start.go:128] duration metric: took 12.959083086s to createHost
	I1008 22:44:16.789810  155694 start.go:83] releasing machines lock for "force-systemd-env-092546", held for 12.959221311s
	I1008 22:44:16.789896  155694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-092546
	I1008 22:44:16.819327  155694 ssh_runner.go:195] Run: cat /version.json
	I1008 22:44:16.819364  155694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:44:16.819385  155694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-092546
	I1008 22:44:16.819436  155694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-092546
	I1008 22:44:16.857689  155694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33016 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-env-092546/id_rsa Username:docker}
	I1008 22:44:16.858499  155694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33016 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-env-092546/id_rsa Username:docker}
	I1008 22:44:16.985495  155694 ssh_runner.go:195] Run: systemctl --version
	I1008 22:44:17.097233  155694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:44:17.172980  155694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:44:17.182632  155694 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:44:17.182711  155694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:44:17.236894  155694 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:44:17.236917  155694 start.go:495] detecting cgroup driver to use...
	I1008 22:44:17.236943  155694 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1008 22:44:17.236994  155694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:44:17.264366  155694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:44:17.286048  155694 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:44:17.286113  155694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:44:17.315328  155694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:44:17.350178  155694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:44:17.572413  155694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:44:17.796140  155694 docker.go:234] disabling docker service ...
	I1008 22:44:17.796207  155694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:44:17.831610  155694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:44:17.854400  155694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:44:18.074359  155694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:44:18.273064  155694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:44:18.299441  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:44:18.322992  155694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:44:18.323054  155694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:44:18.336560  155694 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 22:44:18.336625  155694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:44:18.348616  155694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:44:18.357683  155694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:44:18.370362  155694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:44:18.387594  155694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:44:18.405017  155694 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:44:18.423726  155694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:44:18.432861  155694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:44:18.447089  155694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:44:18.459012  155694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:44:18.680251  155694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 22:44:18.874061  155694 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:44:18.874127  155694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:44:18.879337  155694 start.go:563] Will wait 60s for crictl version
	I1008 22:44:18.879399  155694 ssh_runner.go:195] Run: which crictl
	I1008 22:44:18.887250  155694 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:44:18.928662  155694 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:44:18.928818  155694 ssh_runner.go:195] Run: crio --version
	I1008 22:44:18.985448  155694 ssh_runner.go:195] Run: crio --version
	I1008 22:44:19.033779  155694 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:44:19.036664  155694 cli_runner.go:164] Run: docker network inspect force-systemd-env-092546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:44:19.073981  155694 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:44:19.078197  155694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:44:19.092456  155694 kubeadm.go:883] updating cluster {Name:force-systemd-env-092546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-092546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:44:19.092587  155694 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:44:19.092650  155694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:44:19.136924  155694 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:44:19.136949  155694 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:44:19.137005  155694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:44:19.190190  155694 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:44:19.190216  155694 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:44:19.190224  155694 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1008 22:44:19.190311  155694 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-092546 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-092546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:44:19.190399  155694 ssh_runner.go:195] Run: crio config
	I1008 22:44:19.287944  155694 cni.go:84] Creating CNI manager for ""
	I1008 22:44:19.288017  155694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:44:19.288047  155694 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:44:19.288101  155694 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-092546 NodeName:force-systemd-env-092546 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:44:19.288273  155694 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-092546"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:44:19.288374  155694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:44:19.300161  155694 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:44:19.300283  155694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:44:19.316093  155694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1008 22:44:19.339309  155694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:44:19.363981  155694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1008 22:44:19.382858  155694 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:44:19.394131  155694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:44:19.413394  155694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:44:19.580689  155694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:44:19.619013  155694 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546 for IP: 192.168.85.2
	I1008 22:44:19.619081  155694 certs.go:195] generating shared ca certs ...
	I1008 22:44:19.619114  155694 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:44:19.619331  155694 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:44:19.619419  155694 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:44:19.619473  155694 certs.go:257] generating profile certs ...
	I1008 22:44:19.619571  155694 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/client.key
	I1008 22:44:19.619605  155694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/client.crt with IP's: []
	I1008 22:44:19.829417  155694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/client.crt ...
	I1008 22:44:19.829495  155694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/client.crt: {Name:mk2a047585e7e0414cc2b14e7f5424c5c99e75fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:44:19.829751  155694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/client.key ...
	I1008 22:44:19.829791  155694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/client.key: {Name:mkecf21764ae86905cc7379e03f42c49c2da70f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:44:19.829955  155694 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.key.0ec36f1d
	I1008 22:44:19.829998  155694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.crt.0ec36f1d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1008 22:44:20.975358  155694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.crt.0ec36f1d ...
	I1008 22:44:20.975434  155694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.crt.0ec36f1d: {Name:mk8b9c1aa4a80ce366e4c5f8914e1b62550900ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:44:20.975668  155694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.key.0ec36f1d ...
	I1008 22:44:20.975704  155694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.key.0ec36f1d: {Name:mk318e58d9e57aa2fe0fffd98b46cf982595875f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:44:20.975841  155694 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.crt.0ec36f1d -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.crt
	I1008 22:44:20.975974  155694 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.key.0ec36f1d -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.key
	I1008 22:44:20.976094  155694 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.key
	I1008 22:44:20.976145  155694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.crt with IP's: []
	I1008 22:44:21.646641  155694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.crt ...
	I1008 22:44:21.646712  155694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.crt: {Name:mka5a4fc8657f153cb77cf1ad8bdae98b491d88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:44:21.646942  155694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.key ...
	I1008 22:44:21.646977  155694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.key: {Name:mk10befc751d565136d115bc13efa69f13af20b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:44:21.648681  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 22:44:21.648752  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 22:44:21.648784  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 22:44:21.648831  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 22:44:21.648867  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 22:44:21.648903  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 22:44:21.648945  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 22:44:21.648979  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 22:44:21.649071  155694 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:44:21.649127  155694 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:44:21.649175  155694 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:44:21.649228  155694 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:44:21.649293  155694 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:44:21.649342  155694 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:44:21.649429  155694 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:44:21.649483  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:44:21.649513  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem -> /usr/share/ca-certificates/4286.pem
	I1008 22:44:21.649555  155694 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> /usr/share/ca-certificates/42862.pem
	I1008 22:44:21.650159  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:44:21.667488  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:44:21.686462  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:44:21.705942  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:44:21.726760  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1008 22:44:21.747411  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:44:21.766025  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:44:21.786224  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-env-092546/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 22:44:21.809414  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:44:21.830685  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:44:21.850912  155694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:44:21.871703  155694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:44:21.886062  155694 ssh_runner.go:195] Run: openssl version
	I1008 22:44:21.892711  155694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:44:21.903989  155694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:44:21.908602  155694 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:44:21.908692  155694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:44:21.952244  155694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:44:21.961626  155694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:44:21.971777  155694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:44:21.976988  155694 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:44:21.977056  155694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:44:22.020649  155694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:44:22.035772  155694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:44:22.049041  155694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:44:22.056076  155694 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:44:22.056151  155694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:44:22.107632  155694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:44:22.117169  155694 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:44:22.123287  155694 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 22:44:22.123340  155694 kubeadm.go:400] StartCluster: {Name:force-systemd-env-092546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-092546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:44:22.123423  155694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:44:22.123481  155694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:44:22.162820  155694 cri.go:89] found id: ""
	I1008 22:44:22.162898  155694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:44:22.173382  155694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:44:22.182413  155694 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:44:22.182482  155694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:44:22.193912  155694 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:44:22.193949  155694 kubeadm.go:157] found existing configuration files:
	
	I1008 22:44:22.194001  155694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:44:22.203534  155694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:44:22.203610  155694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:44:22.212464  155694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:44:22.225342  155694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:44:22.225406  155694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:44:22.233739  155694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:44:22.243183  155694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:44:22.243246  155694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:44:22.255556  155694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:44:22.272772  155694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:44:22.272839  155694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:44:22.281008  155694 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:44:22.343837  155694 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:44:22.344192  155694 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:44:22.381335  155694 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:44:22.381411  155694 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:44:22.381454  155694 kubeadm.go:318] OS: Linux
	I1008 22:44:22.381505  155694 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:44:22.381557  155694 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:44:22.381609  155694 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:44:22.381743  155694 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:44:22.381798  155694 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:44:22.381850  155694 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:44:22.381899  155694 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:44:22.381951  155694 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:44:22.382000  155694 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:44:22.482222  155694 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:44:22.482394  155694 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:44:22.482513  155694 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:44:22.501960  155694 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:44:22.506994  155694 out.go:252]   - Generating certificates and keys ...
	I1008 22:44:22.507095  155694 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:44:22.507184  155694 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:44:24.515306  155694 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 22:44:24.961577  155694 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 22:44:25.926485  155694 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 22:44:26.258039  155694 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 22:44:26.932753  155694 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 22:44:26.932902  155694 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-092546 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:44:27.204989  155694 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 22:44:27.205152  155694 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-092546 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:44:28.284493  155694 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 22:44:28.774869  155694 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 22:44:29.624469  155694 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 22:44:29.624816  155694 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:44:30.393185  155694 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:44:31.759293  155694 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:44:32.354118  155694 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:44:32.633475  155694 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:44:33.303399  155694 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:44:33.311118  155694 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:44:33.317259  155694 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:44:33.320726  155694 out.go:252]   - Booting up control plane ...
	I1008 22:44:33.320885  155694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:44:33.320995  155694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:44:33.322192  155694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:44:33.355055  155694 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:44:33.355169  155694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:44:33.365212  155694 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:44:33.365312  155694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:44:33.365352  155694 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:44:33.596220  155694 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:44:33.596341  155694 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:44:35.098034  155694 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501924964s
	I1008 22:44:35.102094  155694 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:44:35.102186  155694 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1008 22:44:35.102284  155694 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:44:35.102990  155694 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:48:35.103237  155694 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000901942s
	I1008 22:48:35.104434  155694 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000686686s
	I1008 22:48:35.104682  155694 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001056569s
	I1008 22:48:35.104694  155694 kubeadm.go:318] 
	I1008 22:48:35.104785  155694 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 22:48:35.104867  155694 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 22:48:35.104958  155694 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 22:48:35.105052  155694 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 22:48:35.105126  155694 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 22:48:35.105385  155694 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 22:48:35.105407  155694 kubeadm.go:318] 
	I1008 22:48:35.110149  155694 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:48:35.110397  155694 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:48:35.110513  155694 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:48:35.111079  155694 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 22:48:35.111154  155694 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1008 22:48:35.111297  155694 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-092546 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-092546 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501924964s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000901942s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000686686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001056569s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-092546 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-092546 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501924964s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000901942s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000686686s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001056569s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 22:48:35.111379  155694 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 22:48:35.647946  155694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:48:35.661042  155694 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:48:35.661105  155694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:48:35.669310  155694 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:48:35.669329  155694 kubeadm.go:157] found existing configuration files:
	
	I1008 22:48:35.669392  155694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:48:35.677430  155694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:48:35.677515  155694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:48:35.685178  155694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:48:35.693191  155694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:48:35.693259  155694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:48:35.701265  155694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:48:35.709148  155694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:48:35.709217  155694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:48:35.716845  155694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:48:35.726320  155694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:48:35.726386  155694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:48:35.734288  155694 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:48:35.795335  155694 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:48:35.795577  155694 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:48:35.879042  155694 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:52:41.445792  155694 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 22:52:41.445884  155694 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 22:52:41.449836  155694 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:52:41.449900  155694 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:52:41.449993  155694 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:52:41.450055  155694 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:52:41.450095  155694 kubeadm.go:318] OS: Linux
	I1008 22:52:41.450150  155694 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:52:41.450214  155694 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:52:41.450267  155694 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:52:41.450332  155694 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:52:41.450390  155694 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:52:41.450449  155694 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:52:41.450508  155694 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:52:41.450564  155694 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:52:41.450618  155694 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:52:41.450696  155694 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:52:41.450797  155694 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:52:41.450894  155694 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:52:41.450962  155694 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:52:41.454128  155694 out.go:252]   - Generating certificates and keys ...
	I1008 22:52:41.454240  155694 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:52:41.454311  155694 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:52:41.454391  155694 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 22:52:41.454456  155694 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 22:52:41.454529  155694 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 22:52:41.454587  155694 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 22:52:41.454652  155694 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 22:52:41.454719  155694 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 22:52:41.454798  155694 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 22:52:41.454874  155694 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 22:52:41.454915  155694 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 22:52:41.454971  155694 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:52:41.455023  155694 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:52:41.455084  155694 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:52:41.455140  155694 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:52:41.455213  155694 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:52:41.455274  155694 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:52:41.455365  155694 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:52:41.455436  155694 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:52:41.458395  155694 out.go:252]   - Booting up control plane ...
	I1008 22:52:41.458501  155694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:52:41.458603  155694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:52:41.458707  155694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:52:41.458821  155694 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:52:41.458956  155694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:52:41.459078  155694 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:52:41.459167  155694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:52:41.459221  155694 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:52:41.459359  155694 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:52:41.459486  155694 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:52:41.459551  155694 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501759082s
	I1008 22:52:41.459670  155694 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:52:41.459767  155694 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1008 22:52:41.459894  155694 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:52:41.459997  155694 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:52:41.460105  155694 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	I1008 22:52:41.460182  155694 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	I1008 22:52:41.460302  155694 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	I1008 22:52:41.460325  155694 kubeadm.go:318] 
	I1008 22:52:41.460438  155694 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 22:52:41.460551  155694 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 22:52:41.460656  155694 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 22:52:41.460790  155694 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 22:52:41.460898  155694 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 22:52:41.460991  155694 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 22:52:41.461016  155694 kubeadm.go:318] 
	I1008 22:52:41.461056  155694 kubeadm.go:402] duration metric: took 8m19.337719131s to StartCluster
	I1008 22:52:41.461112  155694 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 22:52:41.461184  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 22:52:41.487222  155694 cri.go:89] found id: ""
	I1008 22:52:41.487258  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.487267  155694 logs.go:284] No container was found matching "kube-apiserver"
	I1008 22:52:41.487274  155694 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 22:52:41.487332  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 22:52:41.513761  155694 cri.go:89] found id: ""
	I1008 22:52:41.513787  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.513796  155694 logs.go:284] No container was found matching "etcd"
	I1008 22:52:41.513803  155694 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 22:52:41.513864  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 22:52:41.539757  155694 cri.go:89] found id: ""
	I1008 22:52:41.539785  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.539794  155694 logs.go:284] No container was found matching "coredns"
	I1008 22:52:41.539802  155694 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 22:52:41.539900  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 22:52:41.568039  155694 cri.go:89] found id: ""
	I1008 22:52:41.568061  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.568069  155694 logs.go:284] No container was found matching "kube-scheduler"
	I1008 22:52:41.568079  155694 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 22:52:41.568139  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 22:52:41.597979  155694 cri.go:89] found id: ""
	I1008 22:52:41.598002  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.598011  155694 logs.go:284] No container was found matching "kube-proxy"
	I1008 22:52:41.598018  155694 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 22:52:41.598083  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 22:52:41.623538  155694 cri.go:89] found id: ""
	I1008 22:52:41.623571  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.623580  155694 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 22:52:41.623587  155694 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 22:52:41.623658  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 22:52:41.650750  155694 cri.go:89] found id: ""
	I1008 22:52:41.650771  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.650779  155694 logs.go:284] No container was found matching "kindnet"
	I1008 22:52:41.650788  155694 logs.go:123] Gathering logs for kubelet ...
	I1008 22:52:41.650800  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 22:52:41.738348  155694 logs.go:123] Gathering logs for dmesg ...
	I1008 22:52:41.738382  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 22:52:41.755326  155694 logs.go:123] Gathering logs for describe nodes ...
	I1008 22:52:41.755353  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 22:52:41.822643  155694 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 22:52:41.814386    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.815053    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.816685    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.817415    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.818418    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 22:52:41.814386    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.815053    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.816685    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.817415    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.818418    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 22:52:41.822663  155694 logs.go:123] Gathering logs for CRI-O ...
	I1008 22:52:41.822676  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 22:52:41.897187  155694 logs.go:123] Gathering logs for container status ...
	I1008 22:52:41.897228  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 22:52:41.925765  155694 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501759082s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 22:52:41.925832  155694 out.go:285] * 
	* 
	W1008 22:52:41.925886  155694 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501759082s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501759082s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:52:41.925913  155694 out.go:285] * 
	* 
	W1008 22:52:41.928621  155694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 22:52:41.934516  155694 out.go:203] 
	W1008 22:52:41.937521  155694 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501759082s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501759082s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:52:41.937557  155694 out.go:285] * 
	* 
	I1008 22:52:41.940665  155694 out.go:203] 

                                                
                                                
** /stderr **
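Since kubeadm reported that none of the control-plane components ever became healthy and the later `crictl ps` calls found no kube-* containers at all, the next step the kubeadm output itself suggests is inspecting CRI-O directly. A minimal troubleshooting sketch, assuming shell access to the node via `minikube ssh -p force-systemd-env-092546` and reusing only commands that appear in the log above:

	# list any kube-* containers CRI-O knows about (running or exited)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of a failing container found above (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# check kubelet and CRI-O service logs for static-pod creation errors
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u crio -n 400 --no-pager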
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-092546 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-08 22:52:42.004857757 +0000 UTC m=+3741.556126745
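The wait-control-plane phase in the log polls three health endpoints for up to 4m0s each: the apiserver at https://192.168.85.2:8443/livez, the controller-manager at https://127.0.0.1:10257/healthz, and the scheduler at https://127.0.0.1:10259/livez. A sketch of probing the same endpoints by hand from inside the node, assuming curl is available there (the components serve self-signed certificates, hence -k):

	curl -sk https://192.168.85.2:8443/livez
	curl -sk https://127.0.0.1:10257/healthz
	curl -sk https://127.0.0.1:10259/livez

A "connection refused" from all three would be consistent with the crictl output above, which found no kube-* containers at all, i.e. the static pods were never created rather than failing their health checks.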
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-092546
helpers_test.go:243: (dbg) docker inspect force-systemd-env-092546:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11cbc78438da5e3f57417d2fa50a97bf91898b5dbb59b672a0c78eb44f91daf3",
	        "Created": "2025-10-08T22:44:09.523868737Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156259,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:44:09.621130635Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/11cbc78438da5e3f57417d2fa50a97bf91898b5dbb59b672a0c78eb44f91daf3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11cbc78438da5e3f57417d2fa50a97bf91898b5dbb59b672a0c78eb44f91daf3/hostname",
	        "HostsPath": "/var/lib/docker/containers/11cbc78438da5e3f57417d2fa50a97bf91898b5dbb59b672a0c78eb44f91daf3/hosts",
	        "LogPath": "/var/lib/docker/containers/11cbc78438da5e3f57417d2fa50a97bf91898b5dbb59b672a0c78eb44f91daf3/11cbc78438da5e3f57417d2fa50a97bf91898b5dbb59b672a0c78eb44f91daf3-json.log",
	        "Name": "/force-systemd-env-092546",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-092546:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-092546",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11cbc78438da5e3f57417d2fa50a97bf91898b5dbb59b672a0c78eb44f91daf3",
	                "LowerDir": "/var/lib/docker/overlay2/056a3b38952a384fa623653eb0a7753f605d57cbf2d7090d0d4131d217b8afdf-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/056a3b38952a384fa623653eb0a7753f605d57cbf2d7090d0d4131d217b8afdf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/056a3b38952a384fa623653eb0a7753f605d57cbf2d7090d0d4131d217b8afdf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/056a3b38952a384fa623653eb0a7753f605d57cbf2d7090d0d4131d217b8afdf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-092546",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-092546/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-092546",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-092546",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-092546",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "12a284ce0854f3b16b7dafdb2f9711296740d1e86d4b508e216e36082b5f5832",
	            "SandboxKey": "/var/run/docker/netns/12a284ce0854",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-092546": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:53:ce:56:da:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ddb8fc3ef2132bebd3ddb450f0e4e57dc9dda0143645d52202f65c0d3d58cb2f",
	                    "EndpointID": "2cb65d368efd4095be9234d28e2a3c6dff9ac2f5fe0aa11c8a26365faffa52ed",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-092546",
	                        "11cbc78438da"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-092546 -n force-systemd-env-092546
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-092546 -n force-systemd-env-092546: exit status 6 (384.584876ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 22:52:42.401368  174425 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-092546" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
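The stale-kubeconfig warning in the status output above comes with its own suggested fix: refresh the context for this profile and re-run the status probe. A sketch using the same binary and profile as the post-mortem; note that since the start itself failed at kubeadm, the endpoint may remain missing from the kubeconfig until the cluster actually comes up:

	# refresh the kubectl context for this profile, as the warning suggests
	out/minikube-linux-arm64 update-context -p force-systemd-env-092546
	# re-run the same status check used by the post-mortem
	out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-092546 -n force-systemd-env-092546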
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-092546 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-840929 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl cat docker --no-pager                                                                       │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /etc/docker/daemon.json                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo docker system info                                                                                    │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cri-dockerd --version                                                                                 │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl cat containerd --no-pager                                                                   │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /etc/containerd/config.toml                                                                       │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo containerd config dump                                                                                │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl cat crio --no-pager                                                                         │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo crio config                                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ delete  │ -p cilium-840929                                                                                                            │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:45 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:46 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                   │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ delete  │ -p cert-expiration-292528                                                                                                   │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ start   │ -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:49:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:49:48.712236  171796 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:49:48.712360  171796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:49:48.712370  171796 out.go:374] Setting ErrFile to fd 2...
	I1008 22:49:48.712375  171796 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:49:48.712735  171796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:49:48.713214  171796 out.go:368] Setting JSON to false
	I1008 22:49:48.714182  171796 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5539,"bootTime":1759958250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:49:48.714277  171796 start.go:141] virtualization:  
	I1008 22:49:48.717926  171796 out.go:179] * [force-systemd-flag-385382] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:49:48.722658  171796 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:49:48.722716  171796 notify.go:220] Checking for updates...
	I1008 22:49:48.726226  171796 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:49:48.729723  171796 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:49:48.732977  171796 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:49:48.736143  171796 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:49:48.739393  171796 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:49:48.743015  171796 config.go:182] Loaded profile config "force-systemd-env-092546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:49:48.743143  171796 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:49:48.767594  171796 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:49:48.767735  171796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:49:48.828490  171796 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:49:48.815021378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:49:48.828610  171796 docker.go:318] overlay module found
	I1008 22:49:48.831820  171796 out.go:179] * Using the docker driver based on user configuration
	I1008 22:49:48.834758  171796 start.go:305] selected driver: docker
	I1008 22:49:48.834777  171796 start.go:925] validating driver "docker" against <nil>
	I1008 22:49:48.834792  171796 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:49:48.835515  171796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:49:48.891047  171796 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:49:48.881925666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:49:48.891198  171796 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 22:49:48.891435  171796 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 22:49:48.894608  171796 out.go:179] * Using Docker driver with root privileges
	I1008 22:49:48.897546  171796 cni.go:84] Creating CNI manager for ""
	I1008 22:49:48.897620  171796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:49:48.897742  171796 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 22:49:48.897822  171796 start.go:349] cluster config:
	{Name:force-systemd-flag-385382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:49:48.900938  171796 out.go:179] * Starting "force-systemd-flag-385382" primary control-plane node in "force-systemd-flag-385382" cluster
	I1008 22:49:48.903830  171796 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:49:48.906802  171796 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:49:48.909650  171796 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:49:48.909682  171796 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:49:48.909703  171796 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 22:49:48.909725  171796 cache.go:58] Caching tarball of preloaded images
	I1008 22:49:48.909808  171796 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:49:48.909817  171796 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 22:49:48.909924  171796 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/config.json ...
	I1008 22:49:48.909953  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/config.json: {Name:mk4724c7d82e25ae3bc0667fb81e54635c623861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:49:48.929803  171796 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:49:48.929828  171796 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:49:48.929853  171796 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:49:48.929875  171796 start.go:360] acquireMachinesLock for force-systemd-flag-385382: {Name:mk7c40943b856235fde6dc84ba727699096ce250 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:49:48.929976  171796 start.go:364] duration metric: took 83.382µs to acquireMachinesLock for "force-systemd-flag-385382"
	I1008 22:49:48.930007  171796 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-385382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:49:48.930076  171796 start.go:125] createHost starting for "" (driver="docker")
	I1008 22:49:48.933700  171796 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:49:48.933965  171796 start.go:159] libmachine.API.Create for "force-systemd-flag-385382" (driver="docker")
	I1008 22:49:48.934014  171796 client.go:168] LocalClient.Create starting
	I1008 22:49:48.934104  171796 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:49:48.934150  171796 main.go:141] libmachine: Decoding PEM data...
	I1008 22:49:48.934169  171796 main.go:141] libmachine: Parsing certificate...
	I1008 22:49:48.934231  171796 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:49:48.934253  171796 main.go:141] libmachine: Decoding PEM data...
	I1008 22:49:48.934263  171796 main.go:141] libmachine: Parsing certificate...
	I1008 22:49:48.934642  171796 cli_runner.go:164] Run: docker network inspect force-systemd-flag-385382 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:49:48.951463  171796 cli_runner.go:211] docker network inspect force-systemd-flag-385382 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:49:48.951556  171796 network_create.go:284] running [docker network inspect force-systemd-flag-385382] to gather additional debugging logs...
	I1008 22:49:48.951580  171796 cli_runner.go:164] Run: docker network inspect force-systemd-flag-385382
	W1008 22:49:48.967413  171796 cli_runner.go:211] docker network inspect force-systemd-flag-385382 returned with exit code 1
	I1008 22:49:48.967452  171796 network_create.go:287] error running [docker network inspect force-systemd-flag-385382]: docker network inspect force-systemd-flag-385382: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-385382 not found
	I1008 22:49:48.967467  171796 network_create.go:289] output of [docker network inspect force-systemd-flag-385382]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-385382 not found
	
	** /stderr **
	I1008 22:49:48.967580  171796 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:49:48.984561  171796 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:49:48.984890  171796 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:49:48.985131  171796 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:49:48.985558  171796 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a11a40}
	I1008 22:49:48.985583  171796 network_create.go:124] attempt to create docker network force-systemd-flag-385382 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1008 22:49:48.985751  171796 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-385382 force-systemd-flag-385382
	I1008 22:49:49.048324  171796 network_create.go:108] docker network force-systemd-flag-385382 192.168.76.0/24 created
	I1008 22:49:49.048359  171796 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-385382" container
	I1008 22:49:49.048446  171796 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:49:49.065296  171796 cli_runner.go:164] Run: docker volume create force-systemd-flag-385382 --label name.minikube.sigs.k8s.io=force-systemd-flag-385382 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:49:49.084256  171796 oci.go:103] Successfully created a docker volume force-systemd-flag-385382
	I1008 22:49:49.084338  171796 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-385382-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-385382 --entrypoint /usr/bin/test -v force-systemd-flag-385382:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:49:49.627212  171796 oci.go:107] Successfully prepared a docker volume force-systemd-flag-385382
	I1008 22:49:49.627269  171796 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:49:49.627288  171796 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:49:49.627358  171796 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-385382:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 22:49:54.086598  171796 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-385382:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.459181997s)
	I1008 22:49:54.086634  171796 kic.go:203] duration metric: took 4.459341777s to extract preloaded images to volume ...
	W1008 22:49:54.086771  171796 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:49:54.086892  171796 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:49:54.145908  171796 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-385382 --name force-systemd-flag-385382 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-385382 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-385382 --network force-systemd-flag-385382 --ip 192.168.76.2 --volume force-systemd-flag-385382:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:49:54.439406  171796 cli_runner.go:164] Run: docker container inspect force-systemd-flag-385382 --format={{.State.Running}}
	I1008 22:49:54.464206  171796 cli_runner.go:164] Run: docker container inspect force-systemd-flag-385382 --format={{.State.Status}}
	I1008 22:49:54.490208  171796 cli_runner.go:164] Run: docker exec force-systemd-flag-385382 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:49:54.554845  171796 oci.go:144] the created container "force-systemd-flag-385382" has a running status.
	I1008 22:49:54.554885  171796 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa...
	I1008 22:49:55.865931  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1008 22:49:55.865984  171796 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:49:55.886042  171796 cli_runner.go:164] Run: docker container inspect force-systemd-flag-385382 --format={{.State.Status}}
	I1008 22:49:55.902706  171796 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:49:55.902730  171796 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-385382 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:49:55.944885  171796 cli_runner.go:164] Run: docker container inspect force-systemd-flag-385382 --format={{.State.Status}}
	I1008 22:49:55.966739  171796 machine.go:93] provisionDockerMachine start ...
	I1008 22:49:55.966846  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:55.985010  171796 main.go:141] libmachine: Using SSH client type: native
	I1008 22:49:55.985339  171796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1008 22:49:55.985358  171796 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:49:56.137775  171796 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-385382
	
	I1008 22:49:56.137800  171796 ubuntu.go:182] provisioning hostname "force-systemd-flag-385382"
	I1008 22:49:56.137894  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:56.156414  171796 main.go:141] libmachine: Using SSH client type: native
	I1008 22:49:56.156740  171796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1008 22:49:56.156759  171796 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-385382 && echo "force-systemd-flag-385382" | sudo tee /etc/hostname
	I1008 22:49:56.310909  171796 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-385382
	
	I1008 22:49:56.311001  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:56.329519  171796 main.go:141] libmachine: Using SSH client type: native
	I1008 22:49:56.329868  171796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1008 22:49:56.329893  171796 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-385382' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-385382/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-385382' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:49:56.477916  171796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:49:56.477948  171796 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:49:56.477976  171796 ubuntu.go:190] setting up certificates
	I1008 22:49:56.477985  171796 provision.go:84] configureAuth start
	I1008 22:49:56.478059  171796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-385382
	I1008 22:49:56.496395  171796 provision.go:143] copyHostCerts
	I1008 22:49:56.496436  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:49:56.496467  171796 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:49:56.496479  171796 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:49:56.496555  171796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:49:56.496642  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:49:56.496670  171796 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:49:56.496681  171796 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:49:56.496711  171796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:49:56.496770  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:49:56.496793  171796 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:49:56.496801  171796 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:49:56.496831  171796 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:49:56.496895  171796 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-385382 san=[127.0.0.1 192.168.76.2 force-systemd-flag-385382 localhost minikube]
	I1008 22:49:56.857282  171796 provision.go:177] copyRemoteCerts
	I1008 22:49:56.857356  171796 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:49:56.857411  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:56.874832  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:56.977398  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1008 22:49:56.977465  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 22:49:56.995163  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1008 22:49:56.995230  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:49:57.015746  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1008 22:49:57.015812  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1008 22:49:57.034292  171796 provision.go:87] duration metric: took 556.288441ms to configureAuth
	I1008 22:49:57.034330  171796 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:49:57.034533  171796 config.go:182] Loaded profile config "force-systemd-flag-385382": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:49:57.034643  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.051931  171796 main.go:141] libmachine: Using SSH client type: native
	I1008 22:49:57.052255  171796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33041 <nil> <nil>}
	I1008 22:49:57.052279  171796 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:49:57.318268  171796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:49:57.318294  171796 machine.go:96] duration metric: took 1.351533543s to provisionDockerMachine
	I1008 22:49:57.318304  171796 client.go:171] duration metric: took 8.384278375s to LocalClient.Create
	I1008 22:49:57.318339  171796 start.go:167] duration metric: took 8.384378922s to libmachine.API.Create "force-systemd-flag-385382"
	I1008 22:49:57.318363  171796 start.go:293] postStartSetup for "force-systemd-flag-385382" (driver="docker")
	I1008 22:49:57.318378  171796 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:49:57.318477  171796 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:49:57.318535  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.337941  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:57.442062  171796 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:49:57.446156  171796 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:49:57.446189  171796 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:49:57.446203  171796 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:49:57.446261  171796 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:49:57.446352  171796 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:49:57.446365  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> /etc/ssl/certs/42862.pem
	I1008 22:49:57.446468  171796 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:49:57.454690  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:49:57.473501  171796 start.go:296] duration metric: took 155.118367ms for postStartSetup
	I1008 22:49:57.473992  171796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-385382
	I1008 22:49:57.490786  171796 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/config.json ...
	I1008 22:49:57.491079  171796 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:49:57.491139  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.508070  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:57.606626  171796 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:49:57.611673  171796 start.go:128] duration metric: took 8.681580536s to createHost
	I1008 22:49:57.611700  171796 start.go:83] releasing machines lock for "force-systemd-flag-385382", held for 8.681709587s
	I1008 22:49:57.611774  171796 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-385382
	I1008 22:49:57.628772  171796 ssh_runner.go:195] Run: cat /version.json
	I1008 22:49:57.628804  171796 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:49:57.628824  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.628879  171796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-385382
	I1008 22:49:57.645613  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:57.648323  171796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33041 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/force-systemd-flag-385382/id_rsa Username:docker}
	I1008 22:49:57.833370  171796 ssh_runner.go:195] Run: systemctl --version
	I1008 22:49:57.840089  171796 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:49:57.880249  171796 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:49:57.884381  171796 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:49:57.884456  171796 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:49:57.913403  171796 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:49:57.913428  171796 start.go:495] detecting cgroup driver to use...
	I1008 22:49:57.913441  171796 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1008 22:49:57.913498  171796 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:49:57.930576  171796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:49:57.944205  171796 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:49:57.944302  171796 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:49:57.962011  171796 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:49:57.980810  171796 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:49:58.104608  171796 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:49:58.222664  171796 docker.go:234] disabling docker service ...
	I1008 22:49:58.222747  171796 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:49:58.248972  171796 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:49:58.262862  171796 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:49:58.389930  171796 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:49:58.512350  171796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:49:58.525595  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:49:58.540156  171796 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:49:58.540242  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.549119  171796 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1008 22:49:58.549216  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.558308  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.567398  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.575922  171796 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:49:58.584034  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.592779  171796 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.606007  171796 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:49:58.614900  171796 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:49:58.622481  171796 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:49:58.630378  171796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:49:58.737671  171796 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 22:49:58.862828  171796 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:49:58.862948  171796 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:49:58.866782  171796 start.go:563] Will wait 60s for crictl version
	I1008 22:49:58.866887  171796 ssh_runner.go:195] Run: which crictl
	I1008 22:49:58.870510  171796 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:49:58.899067  171796 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:49:58.899160  171796 ssh_runner.go:195] Run: crio --version
	I1008 22:49:58.926179  171796 ssh_runner.go:195] Run: crio --version
	I1008 22:49:58.960319  171796 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:49:58.963490  171796 cli_runner.go:164] Run: docker network inspect force-systemd-flag-385382 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:49:58.980189  171796 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 22:49:58.984052  171796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:49:58.993800  171796 kubeadm.go:883] updating cluster {Name:force-systemd-flag-385382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:49:58.993916  171796 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:49:58.993971  171796 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:49:59.025951  171796 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:49:59.025975  171796 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:49:59.026040  171796 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:49:59.054812  171796 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:49:59.054841  171796 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:49:59.054853  171796 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1008 22:49:59.054956  171796 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-385382 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:49:59.055041  171796 ssh_runner.go:195] Run: crio config
	I1008 22:49:59.129510  171796 cni.go:84] Creating CNI manager for ""
	I1008 22:49:59.129535  171796 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:49:59.129553  171796 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:49:59.129576  171796 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-385382 NodeName:force-systemd-flag-385382 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:49:59.129729  171796 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-385382"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:49:59.129802  171796 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:49:59.138084  171796 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:49:59.138200  171796 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:49:59.146009  171796 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1008 22:49:59.161133  171796 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:49:59.178605  171796 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1008 22:49:59.196778  171796 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:49:59.203550  171796 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:49:59.213577  171796 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:49:59.334736  171796 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:49:59.350400  171796 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382 for IP: 192.168.76.2
	I1008 22:49:59.350422  171796 certs.go:195] generating shared ca certs ...
	I1008 22:49:59.350439  171796 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:49:59.350574  171796 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:49:59.350623  171796 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:49:59.350635  171796 certs.go:257] generating profile certs ...
	I1008 22:49:59.350690  171796 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.key
	I1008 22:49:59.350717  171796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.crt with IP's: []
	I1008 22:49:59.477834  171796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.crt ...
	I1008 22:49:59.477863  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.crt: {Name:mkcee2ee18d6ccbe255790a5d8793754f69334e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:49:59.478073  171796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.key ...
	I1008 22:49:59.478090  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/client.key: {Name:mk3fd32a08b0d274ece9c9af9af1e7c02122a456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:49:59.478192  171796 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key.bddc7413
	I1008 22:49:59.478211  171796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt.bddc7413 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1008 22:50:00.088051  171796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt.bddc7413 ...
	I1008 22:50:00.089713  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt.bddc7413: {Name:mk3c66a6004f657b3c1cd121f299b346cab07d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:50:00.090032  171796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key.bddc7413 ...
	I1008 22:50:00.101721  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key.bddc7413: {Name:mk6f6444b6ad2a5850f0a82bf2bbb1ad506b7704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:50:00.102041  171796 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt.bddc7413 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt
	I1008 22:50:00.102218  171796 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key.bddc7413 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key
	I1008 22:50:00.102350  171796 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key
	I1008 22:50:00.102393  171796 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt with IP's: []
	I1008 22:50:00.963716  171796 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt ...
	I1008 22:50:00.963798  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt: {Name:mkca240e3833bd193a08b4d38da29c0b6b39a649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:50:00.964057  171796 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key ...
	I1008 22:50:00.964075  171796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key: {Name:mk09568b6f55406b366b608c84e95860ac10c91f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:50:00.964162  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1008 22:50:00.964189  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1008 22:50:00.964202  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1008 22:50:00.964221  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1008 22:50:00.964233  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1008 22:50:00.964248  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1008 22:50:00.964260  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1008 22:50:00.964276  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1008 22:50:00.964328  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:50:00.964381  171796 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:50:00.964399  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:50:00.964449  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:50:00.964478  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:50:00.964503  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:50:00.964550  171796 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:50:00.964586  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:50:00.964598  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem -> /usr/share/ca-certificates/4286.pem
	I1008 22:50:00.964610  171796 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> /usr/share/ca-certificates/42862.pem
	I1008 22:50:00.965216  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:50:00.984020  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:50:01.003580  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:50:01.024793  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:50:01.044740  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1008 22:50:01.065284  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:50:01.085980  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:50:01.110682  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/force-systemd-flag-385382/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 22:50:01.139512  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:50:01.165263  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:50:01.196172  171796 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:50:01.216908  171796 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:50:01.233389  171796 ssh_runner.go:195] Run: openssl version
	I1008 22:50:01.240201  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:50:01.249719  171796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:50:01.253984  171796 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:50:01.254047  171796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:50:01.296207  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:50:01.305436  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:50:01.314700  171796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:50:01.318757  171796 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:50:01.318847  171796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:50:01.361711  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:50:01.371384  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:50:01.380490  171796 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:50:01.384896  171796 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:50:01.384963  171796 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:50:01.427361  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:50:01.436426  171796 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:50:01.440285  171796 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 22:50:01.440372  171796 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-385382 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-385382 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:50:01.440463  171796 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:50:01.440528  171796 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:50:01.468795  171796 cri.go:89] found id: ""
	I1008 22:50:01.468866  171796 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:50:01.477392  171796 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:50:01.485949  171796 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:50:01.486047  171796 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:50:01.494670  171796 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:50:01.494689  171796 kubeadm.go:157] found existing configuration files:
	
	I1008 22:50:01.494744  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:50:01.502944  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:50:01.503031  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:50:01.510785  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:50:01.518858  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:50:01.518932  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:50:01.526551  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:50:01.535053  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:50:01.535165  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:50:01.543489  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:50:01.551669  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:50:01.551746  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:50:01.560118  171796 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:50:01.609923  171796 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:50:01.610312  171796 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:50:01.635874  171796 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:50:01.635957  171796 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:50:01.636002  171796 kubeadm.go:318] OS: Linux
	I1008 22:50:01.636057  171796 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:50:01.636113  171796 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:50:01.636167  171796 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:50:01.636221  171796 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:50:01.636276  171796 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:50:01.636331  171796 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:50:01.636383  171796 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:50:01.636438  171796 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:50:01.636490  171796 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:50:01.713117  171796 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:50:01.713241  171796 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:50:01.713343  171796 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:50:01.726078  171796 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:50:01.732747  171796 out.go:252]   - Generating certificates and keys ...
	I1008 22:50:01.732941  171796 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:50:01.733076  171796 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:50:01.958105  171796 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 22:50:02.572421  171796 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 22:50:02.821561  171796 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 22:50:03.137989  171796 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 22:50:03.464563  171796 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 22:50:03.464842  171796 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 22:50:04.024680  171796 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 22:50:04.024845  171796 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 22:50:05.561249  171796 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 22:50:05.876259  171796 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 22:50:06.657160  171796 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 22:50:06.657460  171796 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:50:06.897006  171796 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:50:07.385522  171796 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:50:07.552829  171796 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:50:07.640297  171796 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:50:09.059816  171796 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:50:09.060397  171796 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:50:09.063837  171796 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:50:09.067495  171796 out.go:252]   - Booting up control plane ...
	I1008 22:50:09.067610  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:50:09.067974  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:50:09.068745  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:50:09.085807  171796 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:50:09.086154  171796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:50:09.094498  171796 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:50:09.094867  171796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:50:09.094925  171796 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:50:09.236248  171796 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:50:09.236373  171796 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:50:11.240868  171796 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000927309s
	I1008 22:50:11.240982  171796 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:50:11.241077  171796 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1008 22:50:11.241182  171796 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:50:11.241267  171796 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:52:41.445792  155694 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 22:52:41.445884  155694 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 22:52:41.449836  155694 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:52:41.449900  155694 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:52:41.449993  155694 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:52:41.450055  155694 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:52:41.450095  155694 kubeadm.go:318] OS: Linux
	I1008 22:52:41.450150  155694 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:52:41.450214  155694 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:52:41.450267  155694 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:52:41.450332  155694 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:52:41.450390  155694 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:52:41.450449  155694 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:52:41.450508  155694 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:52:41.450564  155694 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:52:41.450618  155694 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:52:41.450696  155694 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:52:41.450797  155694 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:52:41.450894  155694 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:52:41.450962  155694 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:52:41.454128  155694 out.go:252]   - Generating certificates and keys ...
	I1008 22:52:41.454240  155694 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:52:41.454311  155694 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:52:41.454391  155694 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 22:52:41.454456  155694 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 22:52:41.454529  155694 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 22:52:41.454587  155694 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 22:52:41.454652  155694 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 22:52:41.454719  155694 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 22:52:41.454798  155694 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 22:52:41.454874  155694 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 22:52:41.454915  155694 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 22:52:41.454971  155694 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:52:41.455023  155694 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:52:41.455084  155694 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:52:41.455140  155694 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:52:41.455213  155694 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:52:41.455274  155694 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:52:41.455365  155694 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:52:41.455436  155694 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:52:41.458395  155694 out.go:252]   - Booting up control plane ...
	I1008 22:52:41.458501  155694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:52:41.458603  155694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:52:41.458707  155694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:52:41.458821  155694 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:52:41.458956  155694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:52:41.459078  155694 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:52:41.459167  155694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:52:41.459221  155694 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:52:41.459359  155694 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:52:41.459486  155694 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:52:41.459551  155694 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501759082s
	I1008 22:52:41.459670  155694 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:52:41.459767  155694 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1008 22:52:41.459894  155694 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:52:41.459997  155694 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:52:41.460105  155694 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	I1008 22:52:41.460182  155694 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	I1008 22:52:41.460302  155694 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	I1008 22:52:41.460325  155694 kubeadm.go:318] 
	I1008 22:52:41.460438  155694 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 22:52:41.460551  155694 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 22:52:41.460656  155694 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 22:52:41.460790  155694 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 22:52:41.460898  155694 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 22:52:41.460991  155694 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 22:52:41.461016  155694 kubeadm.go:318] 
	I1008 22:52:41.461056  155694 kubeadm.go:402] duration metric: took 8m19.337719131s to StartCluster
	I1008 22:52:41.461112  155694 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 22:52:41.461184  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 22:52:41.487222  155694 cri.go:89] found id: ""
	I1008 22:52:41.487258  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.487267  155694 logs.go:284] No container was found matching "kube-apiserver"
	I1008 22:52:41.487274  155694 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 22:52:41.487332  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 22:52:41.513761  155694 cri.go:89] found id: ""
	I1008 22:52:41.513787  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.513796  155694 logs.go:284] No container was found matching "etcd"
	I1008 22:52:41.513803  155694 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 22:52:41.513864  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 22:52:41.539757  155694 cri.go:89] found id: ""
	I1008 22:52:41.539785  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.539794  155694 logs.go:284] No container was found matching "coredns"
	I1008 22:52:41.539802  155694 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 22:52:41.539900  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 22:52:41.568039  155694 cri.go:89] found id: ""
	I1008 22:52:41.568061  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.568069  155694 logs.go:284] No container was found matching "kube-scheduler"
	I1008 22:52:41.568079  155694 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 22:52:41.568139  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 22:52:41.597979  155694 cri.go:89] found id: ""
	I1008 22:52:41.598002  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.598011  155694 logs.go:284] No container was found matching "kube-proxy"
	I1008 22:52:41.598018  155694 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 22:52:41.598083  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 22:52:41.623538  155694 cri.go:89] found id: ""
	I1008 22:52:41.623571  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.623580  155694 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 22:52:41.623587  155694 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 22:52:41.623658  155694 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 22:52:41.650750  155694 cri.go:89] found id: ""
	I1008 22:52:41.650771  155694 logs.go:282] 0 containers: []
	W1008 22:52:41.650779  155694 logs.go:284] No container was found matching "kindnet"
	I1008 22:52:41.650788  155694 logs.go:123] Gathering logs for kubelet ...
	I1008 22:52:41.650800  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 22:52:41.738348  155694 logs.go:123] Gathering logs for dmesg ...
	I1008 22:52:41.738382  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 22:52:41.755326  155694 logs.go:123] Gathering logs for describe nodes ...
	I1008 22:52:41.755353  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 22:52:41.822643  155694 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 22:52:41.814386    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.815053    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.816685    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.817415    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.818418    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 22:52:41.814386    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.815053    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.816685    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.817415    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:41.818418    2392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 22:52:41.822663  155694 logs.go:123] Gathering logs for CRI-O ...
	I1008 22:52:41.822676  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 22:52:41.897187  155694 logs.go:123] Gathering logs for container status ...
	I1008 22:52:41.897228  155694 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 22:52:41.925765  155694 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501759082s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 22:52:41.925832  155694 out.go:285] * 
	W1008 22:52:41.925886  155694 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501759082s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:52:41.925913  155694 out.go:285] * 
	W1008 22:52:41.928621  155694 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 22:52:41.934516  155694 out.go:203] 
	W1008 22:52:41.937521  155694 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501759082s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000015034s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000488329s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001185362s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:52:41.937557  155694 out.go:285] * 
	I1008 22:52:41.940665  155694 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 22:52:31 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:31.189159229Z" level=info msg="createCtr: removing container 5d7e5a970b1e6f34c30769202e6cea0e9f71dc57438f53760ec4a0a04150b1fc" id=30b263a7-a9d9-41e1-af87-97d72f0df469 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:31 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:31.189210167Z" level=info msg="createCtr: deleting container 5d7e5a970b1e6f34c30769202e6cea0e9f71dc57438f53760ec4a0a04150b1fc from storage" id=30b263a7-a9d9-41e1-af87-97d72f0df469 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:31 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:31.192150854Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-env-092546_kube-system_c587a402c8bf9086c843f8717207c8dd_0" id=30b263a7-a9d9-41e1-af87-97d72f0df469 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.166389364Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ac664246-02fe-4655-85df-f1a08ae7bea0 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.17015063Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=473ca08f-01b7-4b00-bead-d7d018f25141 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.17128428Z" level=info msg="Creating container: kube-system/etcd-force-systemd-env-092546/etcd" id=cfbd112f-c962-42f3-a040-a013d9b9b140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.171632395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.176488508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.177014767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.188344424Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=cfbd112f-c962-42f3-a040-a013d9b9b140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.189518543Z" level=info msg="createCtr: deleting container ID 704b87a9c6f2f96a7713a65b4d006dbf79e90fe6dca766e9af83a31272934978 from idIndex" id=cfbd112f-c962-42f3-a040-a013d9b9b140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.189561382Z" level=info msg="createCtr: removing container 704b87a9c6f2f96a7713a65b4d006dbf79e90fe6dca766e9af83a31272934978" id=cfbd112f-c962-42f3-a040-a013d9b9b140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.189666499Z" level=info msg="createCtr: deleting container 704b87a9c6f2f96a7713a65b4d006dbf79e90fe6dca766e9af83a31272934978 from storage" id=cfbd112f-c962-42f3-a040-a013d9b9b140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:34 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:34.193185769Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-env-092546_kube-system_c217f9c1a44a6a386045dcf288377cee_0" id=cfbd112f-c962-42f3-a040-a013d9b9b140 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.167152638Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=abeae1e1-f6d1-4323-b10e-b4d3a11f9de5 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.168024361Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2dffe00d-9501-4e59-85ec-e3ac2b2e349e name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.168930177Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-env-092546/kube-apiserver" id=4746ec6f-730f-446c-ac78-881525bc876e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.169271654Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.173870419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.174502869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.185114406Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4746ec6f-730f-446c-ac78-881525bc876e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.186374482Z" level=info msg="createCtr: deleting container ID 926d82129b2dd1bfe97f1303b0e15d9bb550c9e93c33b1eb8b278de964059092 from idIndex" id=4746ec6f-730f-446c-ac78-881525bc876e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.186491808Z" level=info msg="createCtr: removing container 926d82129b2dd1bfe97f1303b0e15d9bb550c9e93c33b1eb8b278de964059092" id=4746ec6f-730f-446c-ac78-881525bc876e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.18657656Z" level=info msg="createCtr: deleting container 926d82129b2dd1bfe97f1303b0e15d9bb550c9e93c33b1eb8b278de964059092 from storage" id=4746ec6f-730f-446c-ac78-881525bc876e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:52:38 force-systemd-env-092546 crio[841]: time="2025-10-08T22:52:38.189139585Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-env-092546_kube-system_ce90a5c69d46be4cc86a49a6988a7244_0" id=4746ec6f-730f-446c-ac78-881525bc876e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 22:52:43.041794    2501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:43.042646    2501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:43.044369    2501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:43.044747    2501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:52:43.046398    2501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +3.771758] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:20] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:21] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:22] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:27] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 22:52:43 up  1:35,  0 user,  load average: 0.18, 1.21, 1.69
	Linux force-systemd-env-092546 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 08 22:52:31 force-systemd-env-092546 kubelet[1824]: E1008 22:52:31.225057    1824 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-092546\" not found"
	Oct 08 22:52:32 force-systemd-env-092546 kubelet[1824]: E1008 22:52:32.506617    1824 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.85.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 08 22:52:32 force-systemd-env-092546 kubelet[1824]: E1008 22:52:32.829322    1824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-092546.186ca5ae034812a7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-092546,UID:force-systemd-env-092546,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-092546 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-092546,},FirstTimestamp:2025-10-08 22:48:41.192968871 +0000 UTC m=+1.254732639,LastTimestamp:2025-10-08 22:48:41.192968871 +0000 UTC m=+1.254732639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet
,ReportingInstance:force-systemd-env-092546,}"
	Oct 08 22:52:34 force-systemd-env-092546 kubelet[1824]: E1008 22:52:34.165921    1824 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-092546\" not found" node="force-systemd-env-092546"
	Oct 08 22:52:34 force-systemd-env-092546 kubelet[1824]: E1008 22:52:34.193493    1824 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 22:52:34 force-systemd-env-092546 kubelet[1824]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 22:52:34 force-systemd-env-092546 kubelet[1824]:  > podSandboxID="1dcced078e79f4017d893b7b60ff2b9f50ac863cba4add40aba7b3d23dd56f07"
	Oct 08 22:52:34 force-systemd-env-092546 kubelet[1824]: E1008 22:52:34.193598    1824 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 22:52:34 force-systemd-env-092546 kubelet[1824]:         container etcd start failed in pod etcd-force-systemd-env-092546_kube-system(c217f9c1a44a6a386045dcf288377cee): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 22:52:34 force-systemd-env-092546 kubelet[1824]:  > logger="UnhandledError"
	Oct 08 22:52:34 force-systemd-env-092546 kubelet[1824]: E1008 22:52:34.194494    1824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-env-092546" podUID="c217f9c1a44a6a386045dcf288377cee"
	Oct 08 22:52:36 force-systemd-env-092546 kubelet[1824]: E1008 22:52:36.413759    1824 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.85.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dforce-systemd-env-092546&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 08 22:52:37 force-systemd-env-092546 kubelet[1824]: E1008 22:52:37.800532    1824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-092546?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Oct 08 22:52:37 force-systemd-env-092546 kubelet[1824]: I1008 22:52:37.982795    1824 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-092546"
	Oct 08 22:52:37 force-systemd-env-092546 kubelet[1824]: E1008 22:52:37.983251    1824 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-env-092546"
	Oct 08 22:52:38 force-systemd-env-092546 kubelet[1824]: E1008 22:52:38.166643    1824 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-092546\" not found" node="force-systemd-env-092546"
	Oct 08 22:52:38 force-systemd-env-092546 kubelet[1824]: E1008 22:52:38.189459    1824 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 08 22:52:38 force-systemd-env-092546 kubelet[1824]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 08 22:52:38 force-systemd-env-092546 kubelet[1824]:  > podSandboxID="5a78e23887b791b1050f3e7eacbedca9dde9e6cfd430f8ed0f9fffa6889ddf7a"
	Oct 08 22:52:38 force-systemd-env-092546 kubelet[1824]: E1008 22:52:38.189567    1824 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 08 22:52:38 force-systemd-env-092546 kubelet[1824]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-env-092546_kube-system(ce90a5c69d46be4cc86a49a6988a7244): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 08 22:52:38 force-systemd-env-092546 kubelet[1824]:  > logger="UnhandledError"
	Oct 08 22:52:38 force-systemd-env-092546 kubelet[1824]: E1008 22:52:38.189611    1824 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-env-092546" podUID="ce90a5c69d46be4cc86a49a6988a7244"
	Oct 08 22:52:41 force-systemd-env-092546 kubelet[1824]: E1008 22:52:41.225608    1824 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-092546\" not found"
	Oct 08 22:52:42 force-systemd-env-092546 kubelet[1824]: E1008 22:52:42.830226    1824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-092546.186ca5ae034812a7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-092546,UID:force-systemd-env-092546,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-092546 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-092546,},FirstTimestamp:2025-10-08 22:48:41.192968871 +0000 UTC m=+1.254732639,LastTimestamp:2025-10-08 22:48:41.192968871 +0000 UTC m=+1.254732639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet
,ReportingInstance:force-systemd-env-092546,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-092546 -n force-systemd-env-092546
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-092546 -n force-systemd-env-092546: exit status 6 (310.909298ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 22:52:43.470765  174638 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-092546" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-092546" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-092546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-092546
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-092546: (2.004304676s)
--- FAIL: TestForceSystemdEnv (522.00s)
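Note: the CRI-O journal above repeatedly reports "Container creation error: cannot open sd-bus: No such file or directory" for the etcd, kube-apiserver and kube-controller-manager containers, which is why kubeadm's control-plane health checks never pass. That error is typically seen when the OCI runtime is told to use the systemd cgroup manager (which this test forces) but no systemd D-Bus socket is reachable inside the node. A minimal diagnostic sketch, to be run while the node container still exists (this profile was deleted at the end of the test); the config and socket paths are the stock CRI-O/systemd defaults and are assumptions about the kicbase image, not taken from this log:

  # hypothetical node name taken from the log above
  NODE=force-systemd-env-092546

  # 1) Which cgroup manager is CRI-O configured to use? "systemd" requires a reachable sd-bus.
  docker exec "$NODE" grep -Rns cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null

  # 2) Is a systemd D-Bus socket actually present inside the node?
  docker exec "$NODE" ls -l /run/dbus/system_bus_socket /run/systemd/private

  # 3) Confirm the same sd-bus error in the CRI-O journal.
  docker exec "$NODE" journalctl -u crio --no-pager -n 100 | grep -i 'sd-bus'

If cgroup_manager is "systemd" but no D-Bus socket exists, switching CRI-O to the cgroupfs manager or ensuring systemd is actually running as PID 1 in the node container are the usual remedies; this report alone does not show which side is misconfigured here.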

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-101115 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-101115 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-65qcv" [3866f1e9-62ed-4d2d-a647-61dfc501f265] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101115 -n functional-101115
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-08 22:11:50.064590805 +0000 UTC m=+1289.615859801
functional_test.go:1645: (dbg) Run:  kubectl --context functional-101115 describe po hello-node-connect-7d85dfc575-65qcv -n default
functional_test.go:1645: (dbg) kubectl --context functional-101115 describe po hello-node-connect-7d85dfc575-65qcv -n default:
Name:             hello-node-connect-7d85dfc575-65qcv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-101115/192.168.49.2
Start Time:       Wed, 08 Oct 2025 22:01:49 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8vv4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-x8vv4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-65qcv to functional-101115
Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-101115 logs hello-node-connect-7d85dfc575-65qcv -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-101115 logs hello-node-connect-7d85dfc575-65qcv -n default: exit status 1 (99.483348ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-65qcv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-101115 logs hello-node-connect-7d85dfc575-65qcv -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-101115 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-65qcv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-101115/192.168.49.2
Start Time:       Wed, 08 Oct 2025 22:01:49 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8vv4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-x8vv4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-65qcv to functional-101115
Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-101115 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-101115 logs -l app=hello-node-connect: exit status 1 (84.692057ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-65qcv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-101115 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-101115 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.112.111
IPs:                      10.106.112.111
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30827/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
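Note: the Warning events above pin the failure on image resolution rather than on the Service itself: with short-name mode enforcing, the unqualified reference kicbase/echo-server cannot be resolved to a single registry, so every pull ends in ErrImagePull/ImagePullBackOff and the Endpoints list for hello-node-connect stays empty. A minimal sketch of the usual workarounds, assuming the image is published on Docker Hub (the docker.io registry and the 1.0 tag are assumptions, not taken from this report):

  # Sketch only: point the existing deployment at a fully qualified reference so
  # CRI-O's short-name enforcement does not have to guess a registry.
  kubectl --context functional-101115 set image deployment/hello-node-connect \
    echo-server=docker.io/kicbase/echo-server:1.0

  # Alternatively, declare a short-name alias on the node so the unqualified name
  # resolves unambiguously (containers-registries.conf(5) [aliases] syntax; path assumed):
  #   # /etc/containers/registries.conf.d/echo-server.conf
  #   [aliases]
  #   "kicbase/echo-server" = "docker.io/kicbase/echo-server"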
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-101115
helpers_test.go:243: (dbg) docker inspect functional-101115:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1812a86a3135d5f80ffd27bd228c5b47569067ae04ac0815488d140531992c4c",
	        "Created": "2025-10-08T21:58:58.218296612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20033,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T21:58:58.297743422Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/1812a86a3135d5f80ffd27bd228c5b47569067ae04ac0815488d140531992c4c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1812a86a3135d5f80ffd27bd228c5b47569067ae04ac0815488d140531992c4c/hostname",
	        "HostsPath": "/var/lib/docker/containers/1812a86a3135d5f80ffd27bd228c5b47569067ae04ac0815488d140531992c4c/hosts",
	        "LogPath": "/var/lib/docker/containers/1812a86a3135d5f80ffd27bd228c5b47569067ae04ac0815488d140531992c4c/1812a86a3135d5f80ffd27bd228c5b47569067ae04ac0815488d140531992c4c-json.log",
	        "Name": "/functional-101115",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-101115:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-101115",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1812a86a3135d5f80ffd27bd228c5b47569067ae04ac0815488d140531992c4c",
	                "LowerDir": "/var/lib/docker/overlay2/9a7900433f7f7e1bcbd9fbdab72a2f8685fb62f16d9de3248f5da3cb011453f7-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a7900433f7f7e1bcbd9fbdab72a2f8685fb62f16d9de3248f5da3cb011453f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a7900433f7f7e1bcbd9fbdab72a2f8685fb62f16d9de3248f5da3cb011453f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a7900433f7f7e1bcbd9fbdab72a2f8685fb62f16d9de3248f5da3cb011453f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-101115",
	                "Source": "/var/lib/docker/volumes/functional-101115/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-101115",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-101115",
	                "name.minikube.sigs.k8s.io": "functional-101115",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08c197b224998a921400ac69ffa0c8d2cd55aae24cd59159d5b5ecad4fda41df",
	            "SandboxKey": "/var/run/docker/netns/08c197b22499",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-101115": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:63:f2:42:40:32",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed4ff05ee4b3b52a81afa6cd45d8191f9330a6ee0b1caa6967384dbab16f481b",
	                    "EndpointID": "20ec42da1280f8b9778e82ac521636047aac705b8965ea8889f8d3fde36ed05e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-101115",
	                        "1812a86a3135"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
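The NetworkSettings.Ports block above is what the harness reads to reach the node: 22/tcp is published on 127.0.0.1:32778, and the later cli_runner calls extract that value with a Go template. A minimal, self-contained sketch of the same lookup, assuming a local docker CLI and the functional-101115 container from the inspect output (not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same format string the cli_runner invocations below pass to docker inspect.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-101115").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// On this run the JSON above maps 22/tcp to host port 32778.
	fmt.Println("22/tcp published on host port", strings.TrimSpace(string(out)))
}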
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-101115 -n functional-101115
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 logs -n 25: (1.485088276s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-101115 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:00 UTC │ 08 Oct 25 22:00 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 22:00 UTC │ 08 Oct 25 22:00 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Oct 25 22:00 UTC │ 08 Oct 25 22:00 UTC │
	│ kubectl │ functional-101115 kubectl -- --context functional-101115 get pods                                                          │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:00 UTC │ 08 Oct 25 22:00 UTC │
	│ start   │ -p functional-101115 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:00 UTC │ 08 Oct 25 22:01 UTC │
	│ service │ invalid-svc -p functional-101115                                                                                           │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │                     │
	│ config  │ functional-101115 config unset cpus                                                                                        │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ cp      │ functional-101115 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ config  │ functional-101115 config get cpus                                                                                          │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │                     │
	│ config  │ functional-101115 config set cpus 2                                                                                        │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ config  │ functional-101115 config get cpus                                                                                          │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ ssh     │ functional-101115 ssh -n functional-101115 sudo cat /home/docker/cp-test.txt                                               │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ config  │ functional-101115 config unset cpus                                                                                        │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ config  │ functional-101115 config get cpus                                                                                          │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │                     │
	│ ssh     │ functional-101115 ssh echo hello                                                                                           │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ cp      │ functional-101115 cp functional-101115:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3929051007/001/cp-test.txt │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ ssh     │ functional-101115 ssh cat /etc/hostname                                                                                    │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ ssh     │ functional-101115 ssh -n functional-101115 sudo cat /home/docker/cp-test.txt                                               │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ tunnel  │ functional-101115 tunnel --alsologtostderr                                                                                 │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │                     │
	│ tunnel  │ functional-101115 tunnel --alsologtostderr                                                                                 │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │                     │
	│ cp      │ functional-101115 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ ssh     │ functional-101115 ssh -n functional-101115 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ tunnel  │ functional-101115 tunnel --alsologtostderr                                                                                 │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │                     │
	│ addons  │ functional-101115 addons list                                                                                              │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	│ addons  │ functional-101115 addons list -o json                                                                                      │ functional-101115 │ jenkins │ v1.37.0 │ 08 Oct 25 22:01 UTC │ 08 Oct 25 22:01 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:00:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:00:50.473573   24188 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:00:50.473760   24188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:00:50.473765   24188 out.go:374] Setting ErrFile to fd 2...
	I1008 22:00:50.473769   24188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:00:50.474035   24188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:00:50.474390   24188 out.go:368] Setting JSON to false
	I1008 22:00:50.475237   24188 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2601,"bootTime":1759958250,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:00:50.475294   24188 start.go:141] virtualization:  
	I1008 22:00:50.479236   24188 out.go:179] * [functional-101115] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:00:50.482461   24188 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:00:50.482551   24188 notify.go:220] Checking for updates...
	I1008 22:00:50.488390   24188 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:00:50.491341   24188 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:00:50.494233   24188 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:00:50.497730   24188 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:00:50.500618   24188 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:00:50.504067   24188 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:00:50.504170   24188 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:00:50.535839   24188 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:00:50.535944   24188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:00:50.594308   24188 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-08 22:00:50.58458926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:00:50.594408   24188 docker.go:318] overlay module found
	I1008 22:00:50.599446   24188 out.go:179] * Using the docker driver based on existing profile
	I1008 22:00:50.602392   24188 start.go:305] selected driver: docker
	I1008 22:00:50.602401   24188 start.go:925] validating driver "docker" against &{Name:functional-101115 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-101115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:00:50.602503   24188 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:00:50.602616   24188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:00:50.666868   24188 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-08 22:00:50.657140821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:00:50.667319   24188 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:00:50.667348   24188 cni.go:84] Creating CNI manager for ""
	I1008 22:00:50.667402   24188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:00:50.667447   24188 start.go:349] cluster config:
	{Name:functional-101115 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-101115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:00:50.672525   24188 out.go:179] * Starting "functional-101115" primary control-plane node in "functional-101115" cluster
	I1008 22:00:50.675361   24188 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:00:50.678325   24188 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:00:50.681171   24188 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:00:50.681221   24188 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 22:00:50.681245   24188 cache.go:58] Caching tarball of preloaded images
	I1008 22:00:50.681260   24188 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:00:50.681341   24188 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:00:50.681350   24188 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 22:00:50.681472   24188 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/config.json ...
	I1008 22:00:50.701474   24188 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:00:50.701486   24188 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:00:50.701498   24188 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:00:50.701519   24188 start.go:360] acquireMachinesLock for functional-101115: {Name:mkb4dfadd1499ee452c33ccf336092d65e208502 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:00:50.701573   24188 start.go:364] duration metric: took 38.097µs to acquireMachinesLock for "functional-101115"
	I1008 22:00:50.701591   24188 start.go:96] Skipping create...Using existing machine configuration
	I1008 22:00:50.701605   24188 fix.go:54] fixHost starting: 
	I1008 22:00:50.701884   24188 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
	I1008 22:00:50.719554   24188 fix.go:112] recreateIfNeeded on functional-101115: state=Running err=<nil>
	W1008 22:00:50.719572   24188 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 22:00:50.722747   24188 out.go:252] * Updating the running docker "functional-101115" container ...
	I1008 22:00:50.722787   24188 machine.go:93] provisionDockerMachine start ...
	I1008 22:00:50.722862   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:00:50.739836   24188 main.go:141] libmachine: Using SSH client type: native
	I1008 22:00:50.740211   24188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 22:00:50.740219   24188 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:00:50.885500   24188 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-101115
	
	I1008 22:00:50.885514   24188 ubuntu.go:182] provisioning hostname "functional-101115"
	I1008 22:00:50.885575   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:00:50.905484   24188 main.go:141] libmachine: Using SSH client type: native
	I1008 22:00:50.905992   24188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 22:00:50.906004   24188 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-101115 && echo "functional-101115" | sudo tee /etc/hostname
	I1008 22:00:51.067239   24188 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-101115
	
	I1008 22:00:51.067317   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:00:51.086186   24188 main.go:141] libmachine: Using SSH client type: native
	I1008 22:00:51.086493   24188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 22:00:51.086507   24188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-101115' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101115/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-101115' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:00:51.234344   24188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:00:51.234360   24188 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:00:51.234381   24188 ubuntu.go:190] setting up certificates
	I1008 22:00:51.234389   24188 provision.go:84] configureAuth start
	I1008 22:00:51.234455   24188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101115
	I1008 22:00:51.252387   24188 provision.go:143] copyHostCerts
	I1008 22:00:51.252445   24188 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:00:51.252452   24188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:00:51.252536   24188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:00:51.252628   24188 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:00:51.252633   24188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:00:51.252657   24188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:00:51.252704   24188 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:00:51.252707   24188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:00:51.252733   24188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:00:51.252781   24188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.functional-101115 san=[127.0.0.1 192.168.49.2 functional-101115 localhost minikube]
	I1008 22:00:51.397094   24188 provision.go:177] copyRemoteCerts
	I1008 22:00:51.397155   24188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:00:51.397193   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:00:51.418308   24188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
	I1008 22:00:51.521854   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 22:00:51.540027   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:00:51.559437   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 22:00:51.578013   24188 provision.go:87] duration metric: took 343.600396ms to configureAuth
	I1008 22:00:51.578029   24188 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:00:51.578237   24188 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:00:51.578380   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:00:51.596471   24188 main.go:141] libmachine: Using SSH client type: native
	I1008 22:00:51.596822   24188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1008 22:00:51.596852   24188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:00:56.976141   24188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:00:56.976153   24188 machine.go:96] duration metric: took 6.253359359s to provisionDockerMachine
	I1008 22:00:56.976163   24188 start.go:293] postStartSetup for "functional-101115" (driver="docker")
	I1008 22:00:56.976173   24188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:00:56.976232   24188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:00:56.976281   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:00:56.995573   24188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
	I1008 22:00:57.097619   24188 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:00:57.101204   24188 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:00:57.101222   24188 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:00:57.101232   24188 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:00:57.101299   24188 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:00:57.101374   24188 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:00:57.101453   24188 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/test/nested/copy/4286/hosts -> hosts in /etc/test/nested/copy/4286
	I1008 22:00:57.101496   24188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4286
	I1008 22:00:57.109289   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:00:57.128077   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/test/nested/copy/4286/hosts --> /etc/test/nested/copy/4286/hosts (40 bytes)
	I1008 22:00:57.146461   24188 start.go:296] duration metric: took 170.283792ms for postStartSetup
	I1008 22:00:57.146529   24188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:00:57.146583   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:00:57.164157   24188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
	I1008 22:00:57.263075   24188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:00:57.268089   24188 fix.go:56] duration metric: took 6.566487476s for fixHost
	I1008 22:00:57.268104   24188 start.go:83] releasing machines lock for "functional-101115", held for 6.566524145s
	I1008 22:00:57.268169   24188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101115
	I1008 22:00:57.286375   24188 ssh_runner.go:195] Run: cat /version.json
	I1008 22:00:57.286421   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:00:57.286733   24188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:00:57.286776   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:00:57.304536   24188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
	I1008 22:00:57.306548   24188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
	I1008 22:00:57.401807   24188 ssh_runner.go:195] Run: systemctl --version
	I1008 22:00:57.498351   24188 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:00:57.538132   24188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:00:57.542657   24188 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:00:57.542736   24188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:00:57.550569   24188 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 22:00:57.550583   24188 start.go:495] detecting cgroup driver to use...
	I1008 22:00:57.550613   24188 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:00:57.550657   24188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:00:57.565952   24188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:00:57.579432   24188 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:00:57.579498   24188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:00:57.595287   24188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:00:57.609765   24188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:00:57.766108   24188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:00:57.905954   24188 docker.go:234] disabling docker service ...
	I1008 22:00:57.906013   24188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:00:57.921715   24188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:00:57.935326   24188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:00:58.071707   24188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:00:58.209983   24188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:00:58.223906   24188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:00:58.240502   24188 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:00:58.240564   24188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:00:58.250983   24188 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:00:58.251059   24188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:00:58.260167   24188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:00:58.269023   24188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:00:58.277881   24188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:00:58.286064   24188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:00:58.295116   24188 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:00:58.303604   24188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:00:58.312491   24188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:00:58.319953   24188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:00:58.328189   24188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:00:58.465092   24188 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 22:01:04.397696   24188 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.932579828s)
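	Before restarting CRI-O, the commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed: the pause image is pinned to registry.k8s.io/pause:3.10.1 and cgroup_manager is forced to cgroupfs to match the detected host driver. An illustrative sketch of the same two substitutions done in-process with Go regexp (hypothetical file contents; this is not minikube's actual implementation):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical pre-rewrite contents of /etc/crio/crio.conf.d/02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\""

	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Println(conf)
}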
	I1008 22:01:04.397718   24188 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:01:04.397786   24188 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:01:04.402085   24188 start.go:563] Will wait 60s for crictl version
	I1008 22:01:04.402141   24188 ssh_runner.go:195] Run: which crictl
	I1008 22:01:04.406090   24188 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:01:04.432160   24188 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:01:04.432245   24188 ssh_runner.go:195] Run: crio --version
	I1008 22:01:04.463845   24188 ssh_runner.go:195] Run: crio --version
	I1008 22:01:04.495081   24188 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:01:04.497939   24188 cli_runner.go:164] Run: docker network inspect functional-101115 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:01:04.514142   24188 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1008 22:01:04.521529   24188 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1008 22:01:04.524466   24188 kubeadm.go:883] updating cluster {Name:functional-101115 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-101115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:01:04.524623   24188 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:01:04.524696   24188 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:01:04.558499   24188 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:01:04.558514   24188 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:01:04.558591   24188 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:01:04.584495   24188 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:01:04.584507   24188 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:01:04.584514   24188 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1008 22:01:04.584615   24188 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-101115 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-101115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:01:04.584695   24188 ssh_runner.go:195] Run: crio config
	I1008 22:01:04.653995   24188 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1008 22:01:04.654020   24188 cni.go:84] Creating CNI manager for ""
	I1008 22:01:04.654029   24188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:01:04.654042   24188 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:01:04.654067   24188 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101115 NodeName:functional-101115 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:01:04.654187   24188 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-101115"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
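	The apiServer extraArgs entry in the kubeadm config above (enable-admission-plugins: NamespaceAutoProvision) comes from the ExtraOptions shown in the kubeadm options line, merged over the defaults noted at extraconfig.go:125. A minimal sketch, using only the standard library, of rendering that fragment from a key/value map with text/template; this is illustrative and not minikube's own template code:

package main

import (
	"os"
	"text/template"
)

const fragment = `apiServer:
  extraArgs:
{{- range $name, $value := . }}
    - name: "{{ $name }}"
      value: "{{ $value }}"
{{- end }}
`

func main() {
	// The single override applied on this run, per the logged ExtraOptions.
	args := map[string]string{"enable-admission-plugins": "NamespaceAutoProvision"}
	t := template.Must(template.New("apiserver").Parse(fragment))
	if err := t.Execute(os.Stdout, args); err != nil {
		panic(err)
	}
}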
	I1008 22:01:04.654259   24188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:01:04.662611   24188 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:01:04.662672   24188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:01:04.670652   24188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 22:01:04.684316   24188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:01:04.698677   24188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1008 22:01:04.712050   24188 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:01:04.716345   24188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:01:04.865482   24188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:01:04.880128   24188 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115 for IP: 192.168.49.2
	I1008 22:01:04.880138   24188 certs.go:195] generating shared ca certs ...
	I1008 22:01:04.880151   24188 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:01:04.880316   24188 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:01:04.880355   24188 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:01:04.880361   24188 certs.go:257] generating profile certs ...
	I1008 22:01:04.880438   24188 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.key
	I1008 22:01:04.880484   24188 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/apiserver.key.30431ce7
	I1008 22:01:04.880524   24188 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/proxy-client.key
	I1008 22:01:04.880627   24188 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:01:04.880655   24188 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:01:04.880668   24188 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:01:04.880693   24188 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:01:04.880716   24188 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:01:04.880735   24188 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:01:04.880775   24188 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:01:04.881413   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:01:04.900182   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:01:04.918221   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:01:04.936156   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:01:04.954785   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 22:01:04.973242   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:01:04.992110   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:01:05.012712   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 22:01:05.033990   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:01:05.052797   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:01:05.070868   24188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:01:05.089731   24188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:01:05.103410   24188 ssh_runner.go:195] Run: openssl version
	I1008 22:01:05.111007   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:01:05.119749   24188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:01:05.123454   24188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:01:05.123506   24188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:01:05.165139   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:01:05.173390   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:01:05.182161   24188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:01:05.186250   24188 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:01:05.186310   24188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:01:05.228155   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:01:05.236620   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:01:05.245583   24188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:01:05.249524   24188 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:01:05.249592   24188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:01:05.291203   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:01:05.299457   24188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:01:05.303506   24188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 22:01:05.345194   24188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 22:01:05.386638   24188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 22:01:05.428398   24188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 22:01:05.469743   24188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 22:01:05.516960   24188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
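	The certificate handling above relies on two stock openssl invocations: "-hash -noout" computes the subject hash that names the /etc/ssl/certs symlinks (e.g. b5213941.0), and "-checkend 86400" succeeds only if the cert stays valid for at least the next 24 hours. A minimal standalone sketch of both checks (the example.pem path is illustrative, not taken from this run):
	
	# hash-and-symlink step, as run above (cert path is an example)
	CERT=/usr/share/ca-certificates/example.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints the subject hash, e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # hash-named symlink that OpenSSL lookup expects
	
	# expiry check: exit status 0 only if the cert is valid for the next 86400s (24h)
	openssl x509 -noout -in "$CERT" -checkend 86400 && echo "still valid for 24h"
	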
	I1008 22:01:05.558416   24188 kubeadm.go:400] StartCluster: {Name:functional-101115 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-101115 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:01:05.558496   24188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:01:05.558564   24188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:01:05.587196   24188 cri.go:89] found id: "e42c1c5566f542746f692fff5798308a903a10d1f7179f6f82f6b3b53b02c66e"
	I1008 22:01:05.587207   24188 cri.go:89] found id: "1a8769f8928866d833b01aafcca4f2c98b6e8bfd2a52837549ea904513e5192d"
	I1008 22:01:05.587211   24188 cri.go:89] found id: "786cddf60498b08d99303f6076c7112660eeb54f9536cc4aaf0b3e0076807766"
	I1008 22:01:05.587213   24188 cri.go:89] found id: "293530aac1293b36df988caa4e5e00dec14f3fbadc02909ad4db45a998f5f1ce"
	I1008 22:01:05.587216   24188 cri.go:89] found id: "7e4825694c84741fddbed0602f9a6d4b9e4acd50fe1ccbfc03c3092d2dffdb24"
	I1008 22:01:05.587219   24188 cri.go:89] found id: "0fd813dcdfd345398c2dd607ca345fee7568485429128af36017a534ca75bb9e"
	I1008 22:01:05.587221   24188 cri.go:89] found id: "f8f2adf8f0e0a76b90ad046fd4a27eae8eeefb22cf8fdf2546b89d78cc1fe8e7"
	I1008 22:01:05.587223   24188 cri.go:89] found id: "2232c0782cc2670db1f177c47e934690ad2d63bb0a469f26a1d62b7a8e12bd68"
	I1008 22:01:05.587230   24188 cri.go:89] found id: "a92c9503cd2b013a548d24d53187f2d110889ec243ba06d358d90ef07298144e"
	I1008 22:01:05.587235   24188 cri.go:89] found id: "2e314a22ee865c533f7ed6348607916c0b722ccebd0e9512c66839406d5a1ae6"
	I1008 22:01:05.587237   24188 cri.go:89] found id: "ba2a71cbfbc46100a72bd39bc25f158e2298d2f0861ef462bd565ef8258a9b77"
	I1008 22:01:05.587239   24188 cri.go:89] found id: "bdf2c23fc0ff5281403796665e424f093353daa652dee7bfe0bc8c8175845f68"
	I1008 22:01:05.587241   24188 cri.go:89] found id: "d6d670ba9860e8ab37622bd3e2571c1c79afe8df2dd3fe38322d4356f59247f0"
	I1008 22:01:05.587243   24188 cri.go:89] found id: "396715c268984f0cda9062c991602b72f67a2b77624feaeb0cfafc474969f08c"
	I1008 22:01:05.587245   24188 cri.go:89] found id: "772816a160e37bedfe924185ee3684243bb2f20899bd0f66870f3e6f3a033f9c"
	I1008 22:01:05.587249   24188 cri.go:89] found id: "83953e99d7acdc145c076cde21eb9a601c4b6334b547c82c7c2f887725729e16"
	I1008 22:01:05.587251   24188 cri.go:89] found id: ""
	I1008 22:01:05.587298   24188 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 22:01:05.598352   24188 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:01:05Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:01:05.598418   24188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:01:05.606252   24188 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 22:01:05.606262   24188 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 22:01:05.606312   24188 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 22:01:05.614049   24188 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 22:01:05.614579   24188 kubeconfig.go:125] found "functional-101115" server: "https://192.168.49.2:8441"
	I1008 22:01:05.615895   24188 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 22:01:05.623651   24188 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-08 21:59:07.716626175 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-08 22:01:04.705670956 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
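	The drift detection above is just a unified diff of the kubeadm config that was last applied against the one generated for this start; a non-empty diff (diff exit status 1) is what triggers the control-plane reconfiguration that follows. A minimal sketch of that decision, using the same paths as the log:
	
	# diff exits 0 when the files match, 1 when they differ
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	    echo "no drift: keep the running control plane as-is"
	else
	    echo "drift detected: install kubeadm.yaml.new and re-run the kubeadm init phases"
	fi
	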
	I1008 22:01:05.623667   24188 kubeadm.go:1160] stopping kube-system containers ...
	I1008 22:01:05.623678   24188 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1008 22:01:05.623731   24188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:01:05.660421   24188 cri.go:89] found id: "e42c1c5566f542746f692fff5798308a903a10d1f7179f6f82f6b3b53b02c66e"
	I1008 22:01:05.660432   24188 cri.go:89] found id: "1a8769f8928866d833b01aafcca4f2c98b6e8bfd2a52837549ea904513e5192d"
	I1008 22:01:05.660435   24188 cri.go:89] found id: "786cddf60498b08d99303f6076c7112660eeb54f9536cc4aaf0b3e0076807766"
	I1008 22:01:05.660437   24188 cri.go:89] found id: "293530aac1293b36df988caa4e5e00dec14f3fbadc02909ad4db45a998f5f1ce"
	I1008 22:01:05.660440   24188 cri.go:89] found id: "7e4825694c84741fddbed0602f9a6d4b9e4acd50fe1ccbfc03c3092d2dffdb24"
	I1008 22:01:05.660443   24188 cri.go:89] found id: "0fd813dcdfd345398c2dd607ca345fee7568485429128af36017a534ca75bb9e"
	I1008 22:01:05.660445   24188 cri.go:89] found id: "f8f2adf8f0e0a76b90ad046fd4a27eae8eeefb22cf8fdf2546b89d78cc1fe8e7"
	I1008 22:01:05.660447   24188 cri.go:89] found id: "2232c0782cc2670db1f177c47e934690ad2d63bb0a469f26a1d62b7a8e12bd68"
	I1008 22:01:05.660449   24188 cri.go:89] found id: "a92c9503cd2b013a548d24d53187f2d110889ec243ba06d358d90ef07298144e"
	I1008 22:01:05.660455   24188 cri.go:89] found id: "2e314a22ee865c533f7ed6348607916c0b722ccebd0e9512c66839406d5a1ae6"
	I1008 22:01:05.660457   24188 cri.go:89] found id: "ba2a71cbfbc46100a72bd39bc25f158e2298d2f0861ef462bd565ef8258a9b77"
	I1008 22:01:05.660459   24188 cri.go:89] found id: "bdf2c23fc0ff5281403796665e424f093353daa652dee7bfe0bc8c8175845f68"
	I1008 22:01:05.660461   24188 cri.go:89] found id: "d6d670ba9860e8ab37622bd3e2571c1c79afe8df2dd3fe38322d4356f59247f0"
	I1008 22:01:05.660463   24188 cri.go:89] found id: "396715c268984f0cda9062c991602b72f67a2b77624feaeb0cfafc474969f08c"
	I1008 22:01:05.660466   24188 cri.go:89] found id: "772816a160e37bedfe924185ee3684243bb2f20899bd0f66870f3e6f3a033f9c"
	I1008 22:01:05.660469   24188 cri.go:89] found id: "83953e99d7acdc145c076cde21eb9a601c4b6334b547c82c7c2f887725729e16"
	I1008 22:01:05.660472   24188 cri.go:89] found id: ""
	I1008 22:01:05.660476   24188 cri.go:252] Stopping containers: [e42c1c5566f542746f692fff5798308a903a10d1f7179f6f82f6b3b53b02c66e 1a8769f8928866d833b01aafcca4f2c98b6e8bfd2a52837549ea904513e5192d 786cddf60498b08d99303f6076c7112660eeb54f9536cc4aaf0b3e0076807766 293530aac1293b36df988caa4e5e00dec14f3fbadc02909ad4db45a998f5f1ce 7e4825694c84741fddbed0602f9a6d4b9e4acd50fe1ccbfc03c3092d2dffdb24 0fd813dcdfd345398c2dd607ca345fee7568485429128af36017a534ca75bb9e f8f2adf8f0e0a76b90ad046fd4a27eae8eeefb22cf8fdf2546b89d78cc1fe8e7 2232c0782cc2670db1f177c47e934690ad2d63bb0a469f26a1d62b7a8e12bd68 a92c9503cd2b013a548d24d53187f2d110889ec243ba06d358d90ef07298144e 2e314a22ee865c533f7ed6348607916c0b722ccebd0e9512c66839406d5a1ae6 ba2a71cbfbc46100a72bd39bc25f158e2298d2f0861ef462bd565ef8258a9b77 bdf2c23fc0ff5281403796665e424f093353daa652dee7bfe0bc8c8175845f68 d6d670ba9860e8ab37622bd3e2571c1c79afe8df2dd3fe38322d4356f59247f0 396715c268984f0cda9062c991602b72f67a2b77624feaeb0cfafc474969f08c 772816a160e37bedfe924185ee3684243bb2f20899bd0f66870f3e6f3a033f9c 83953e99d7acdc145c076cde21eb9a601c4b6334b547c82c7c2f887725729e16]
	I1008 22:01:05.660546   24188 ssh_runner.go:195] Run: which crictl
	I1008 22:01:05.664121   24188 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 e42c1c5566f542746f692fff5798308a903a10d1f7179f6f82f6b3b53b02c66e 1a8769f8928866d833b01aafcca4f2c98b6e8bfd2a52837549ea904513e5192d 786cddf60498b08d99303f6076c7112660eeb54f9536cc4aaf0b3e0076807766 293530aac1293b36df988caa4e5e00dec14f3fbadc02909ad4db45a998f5f1ce 7e4825694c84741fddbed0602f9a6d4b9e4acd50fe1ccbfc03c3092d2dffdb24 0fd813dcdfd345398c2dd607ca345fee7568485429128af36017a534ca75bb9e f8f2adf8f0e0a76b90ad046fd4a27eae8eeefb22cf8fdf2546b89d78cc1fe8e7 2232c0782cc2670db1f177c47e934690ad2d63bb0a469f26a1d62b7a8e12bd68 a92c9503cd2b013a548d24d53187f2d110889ec243ba06d358d90ef07298144e 2e314a22ee865c533f7ed6348607916c0b722ccebd0e9512c66839406d5a1ae6 ba2a71cbfbc46100a72bd39bc25f158e2298d2f0861ef462bd565ef8258a9b77 bdf2c23fc0ff5281403796665e424f093353daa652dee7bfe0bc8c8175845f68 d6d670ba9860e8ab37622bd3e2571c1c79afe8df2dd3fe38322d4356f59247f0 396715c268984f0cda9062c991602b72f67a2b77624feaeb0cfafc474969f08c 772816a160e37bedfe924185ee3684243bb2f20899bd0f66870f3e6f3a033f9c 83953e99d7acdc145c076cde21eb9a601c4b6334b547c82c7c2f887725729e16
	I1008 22:01:05.767775   24188 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 22:01:05.876840   24188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:01:05.885041   24188 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  8 21:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  8 21:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  8 21:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  8 21:59 /etc/kubernetes/scheduler.conf
	
	I1008 22:01:05.885096   24188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1008 22:01:05.893436   24188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1008 22:01:05.901142   24188 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 22:01:05.901198   24188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:01:05.909062   24188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1008 22:01:05.917158   24188 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 22:01:05.917209   24188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:01:05.925122   24188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1008 22:01:05.932876   24188 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 22:01:05.932928   24188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
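	Each of the grep/rm pairs above applies the same guard: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint, it is deleted so the subsequent "kubeadm init phase kubeconfig all" regenerates it (admin.conf matched here, the other three did not). A compact sketch of that loop, with the endpoint and file names as in this run:
	
	ENDPOINT="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # keep the file only if it already points at the expected endpoint
	    sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
	# removed files are recreated by: kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	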
	I1008 22:01:05.941454   24188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:01:05.950153   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 22:01:06.000721   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 22:01:08.121854   24188 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.121094848s)
	I1008 22:01:08.121920   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 22:01:08.327719   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 22:01:08.406412   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1008 22:01:08.478278   24188 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:01:08.478345   24188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:01:08.978511   24188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:01:09.478456   24188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:01:09.499444   24188 api_server.go:72] duration metric: took 1.021176976s to wait for apiserver process to appear ...
	I1008 22:01:09.499458   24188 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:01:09.499475   24188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1008 22:01:12.367600   24188 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 22:01:12.367617   24188 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 22:01:12.367628   24188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1008 22:01:12.406499   24188 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1008 22:01:12.406516   24188 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1008 22:01:12.499769   24188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1008 22:01:12.558618   24188 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 22:01:12.558655   24188 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 22:01:13.000200   24188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1008 22:01:13.021378   24188 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 22:01:13.021400   24188 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 22:01:13.500313   24188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1008 22:01:13.510279   24188 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 22:01:13.510297   24188 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 22:01:13.999879   24188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1008 22:01:14.011599   24188 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1008 22:01:14.026188   24188 api_server.go:141] control plane version: v1.34.1
	I1008 22:01:14.026218   24188 api_server.go:131] duration metric: took 4.526754333s to wait for apiserver health ...
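	The polling above hits the apiserver's /healthz endpoint directly: the initial 403s come from the anonymous probe, the 500s list which post-start hooks (rbac bootstrap roles, priority classes, bootstrap controller) have not completed yet, and the loop ends once the endpoint returns a plain 200 "ok". To reproduce the same check by hand with authenticated credentials rather than an anonymous request, one could run (a sketch, not part of the test run; paths are the ones already used by this log):
	
	# the ?verbose form prints the same [+]/[-] per-check lines seen above
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl get --raw='/healthz?verbose'
	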
	I1008 22:01:14.026226   24188 cni.go:84] Creating CNI manager for ""
	I1008 22:01:14.026232   24188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:01:14.029527   24188 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 22:01:14.032575   24188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 22:01:14.036950   24188 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 22:01:14.036968   24188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 22:01:14.052908   24188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 22:01:14.626292   24188 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:01:14.632025   24188 system_pods.go:59] 8 kube-system pods found
	I1008 22:01:14.632052   24188 system_pods.go:61] "coredns-66bc5c9577-wxkhn" [8e8d6276-e7c4-4be7-ad1b-666690d3e875] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:01:14.632061   24188 system_pods.go:61] "etcd-functional-101115" [4605d138-60a8-4f5e-8239-2b7ae273c68a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:01:14.632065   24188 system_pods.go:61] "kindnet-vs4xl" [cfa9ce3b-4bb3-4a4b-946f-f1d7e0ad8984] Running
	I1008 22:01:14.632090   24188 system_pods.go:61] "kube-apiserver-functional-101115" [c3b2a63d-1410-4c3e-a180-47e6df8cf2eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:01:14.632097   24188 system_pods.go:61] "kube-controller-manager-functional-101115" [2e350b3e-d201-46d4-8e89-5664e48ee16e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:01:14.632102   24188 system_pods.go:61] "kube-proxy-zl9vj" [ba335d6b-7de4-4b94-8bd0-7589534b1ee5] Running
	I1008 22:01:14.632108   24188 system_pods.go:61] "kube-scheduler-functional-101115" [40fd3e4a-36fe-4f1f-96be-fb9c50555930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:01:14.632111   24188 system_pods.go:61] "storage-provisioner" [b0884d04-4b82-4a92-a494-a0c2fc833c3e] Running
	I1008 22:01:14.632117   24188 system_pods.go:74] duration metric: took 5.81443ms to wait for pod list to return data ...
	I1008 22:01:14.632124   24188 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:01:14.639021   24188 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:01:14.639052   24188 node_conditions.go:123] node cpu capacity is 2
	I1008 22:01:14.639065   24188 node_conditions.go:105] duration metric: took 6.937271ms to run NodePressure ...
	I1008 22:01:14.639140   24188 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 22:01:14.902965   24188 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1008 22:01:14.906214   24188 kubeadm.go:743] kubelet initialised
	I1008 22:01:14.906226   24188 kubeadm.go:744] duration metric: took 3.247456ms waiting for restarted kubelet to initialise ...
	I1008 22:01:14.906247   24188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 22:01:14.915373   24188 ops.go:34] apiserver oom_adj: -16
	I1008 22:01:14.915383   24188 kubeadm.go:601] duration metric: took 9.309116381s to restartPrimaryControlPlane
	I1008 22:01:14.915391   24188 kubeadm.go:402] duration metric: took 9.356983067s to StartCluster
	I1008 22:01:14.915405   24188 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:01:14.915466   24188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:01:14.916138   24188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:01:14.916339   24188 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:01:14.916574   24188 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:01:14.916603   24188 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:01:14.916659   24188 addons.go:69] Setting storage-provisioner=true in profile "functional-101115"
	I1008 22:01:14.916676   24188 addons.go:238] Setting addon storage-provisioner=true in "functional-101115"
	W1008 22:01:14.916681   24188 addons.go:247] addon storage-provisioner should already be in state true
	I1008 22:01:14.916699   24188 host.go:66] Checking if "functional-101115" exists ...
	I1008 22:01:14.917127   24188 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
	I1008 22:01:14.917395   24188 addons.go:69] Setting default-storageclass=true in profile "functional-101115"
	I1008 22:01:14.917408   24188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-101115"
	I1008 22:01:14.917692   24188 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
	I1008 22:01:14.919744   24188 out.go:179] * Verifying Kubernetes components...
	I1008 22:01:14.922835   24188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:01:14.948717   24188 addons.go:238] Setting addon default-storageclass=true in "functional-101115"
	W1008 22:01:14.948728   24188 addons.go:247] addon default-storageclass should already be in state true
	I1008 22:01:14.948749   24188 host.go:66] Checking if "functional-101115" exists ...
	I1008 22:01:14.949149   24188 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
	I1008 22:01:14.965158   24188 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:01:14.967968   24188 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:01:14.967980   24188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:01:14.968050   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:01:14.968814   24188 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:01:14.968822   24188 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:01:14.968867   24188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:01:15.003638   24188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
	I1008 22:01:15.020144   24188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
	I1008 22:01:15.144737   24188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:01:15.160395   24188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:01:15.186832   24188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:01:15.973077   24188 node_ready.go:35] waiting up to 6m0s for node "functional-101115" to be "Ready" ...
	I1008 22:01:15.976263   24188 node_ready.go:49] node "functional-101115" is "Ready"
	I1008 22:01:15.976290   24188 node_ready.go:38] duration metric: took 3.196403ms for node "functional-101115" to be "Ready" ...
	I1008 22:01:15.976302   24188 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:01:15.976369   24188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:01:15.984386   24188 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 22:01:15.987240   24188 addons.go:514] duration metric: took 1.070612653s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 22:01:15.991720   24188 api_server.go:72] duration metric: took 1.075356033s to wait for apiserver process to appear ...
	I1008 22:01:15.991734   24188 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:01:15.991751   24188 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1008 22:01:16.002796   24188 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1008 22:01:16.004713   24188 api_server.go:141] control plane version: v1.34.1
	I1008 22:01:16.004732   24188 api_server.go:131] duration metric: took 12.992343ms to wait for apiserver health ...
	I1008 22:01:16.004741   24188 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:01:16.010455   24188 system_pods.go:59] 8 kube-system pods found
	I1008 22:01:16.010477   24188 system_pods.go:61] "coredns-66bc5c9577-wxkhn" [8e8d6276-e7c4-4be7-ad1b-666690d3e875] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:01:16.010484   24188 system_pods.go:61] "etcd-functional-101115" [4605d138-60a8-4f5e-8239-2b7ae273c68a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:01:16.010489   24188 system_pods.go:61] "kindnet-vs4xl" [cfa9ce3b-4bb3-4a4b-946f-f1d7e0ad8984] Running
	I1008 22:01:16.010497   24188 system_pods.go:61] "kube-apiserver-functional-101115" [c3b2a63d-1410-4c3e-a180-47e6df8cf2eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:01:16.010510   24188 system_pods.go:61] "kube-controller-manager-functional-101115" [2e350b3e-d201-46d4-8e89-5664e48ee16e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:01:16.010515   24188 system_pods.go:61] "kube-proxy-zl9vj" [ba335d6b-7de4-4b94-8bd0-7589534b1ee5] Running
	I1008 22:01:16.010520   24188 system_pods.go:61] "kube-scheduler-functional-101115" [40fd3e4a-36fe-4f1f-96be-fb9c50555930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:01:16.010524   24188 system_pods.go:61] "storage-provisioner" [b0884d04-4b82-4a92-a494-a0c2fc833c3e] Running
	I1008 22:01:16.010530   24188 system_pods.go:74] duration metric: took 5.783061ms to wait for pod list to return data ...
	I1008 22:01:16.010537   24188 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:01:16.013823   24188 default_sa.go:45] found service account: "default"
	I1008 22:01:16.013837   24188 default_sa.go:55] duration metric: took 3.295153ms for default service account to be created ...
	I1008 22:01:16.013845   24188 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:01:16.016961   24188 system_pods.go:86] 8 kube-system pods found
	I1008 22:01:16.016980   24188 system_pods.go:89] "coredns-66bc5c9577-wxkhn" [8e8d6276-e7c4-4be7-ad1b-666690d3e875] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:01:16.016993   24188 system_pods.go:89] "etcd-functional-101115" [4605d138-60a8-4f5e-8239-2b7ae273c68a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:01:16.016997   24188 system_pods.go:89] "kindnet-vs4xl" [cfa9ce3b-4bb3-4a4b-946f-f1d7e0ad8984] Running
	I1008 22:01:16.017003   24188 system_pods.go:89] "kube-apiserver-functional-101115" [c3b2a63d-1410-4c3e-a180-47e6df8cf2eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:01:16.017008   24188 system_pods.go:89] "kube-controller-manager-functional-101115" [2e350b3e-d201-46d4-8e89-5664e48ee16e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:01:16.017012   24188 system_pods.go:89] "kube-proxy-zl9vj" [ba335d6b-7de4-4b94-8bd0-7589534b1ee5] Running
	I1008 22:01:16.017016   24188 system_pods.go:89] "kube-scheduler-functional-101115" [40fd3e4a-36fe-4f1f-96be-fb9c50555930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:01:16.017019   24188 system_pods.go:89] "storage-provisioner" [b0884d04-4b82-4a92-a494-a0c2fc833c3e] Running
	I1008 22:01:16.017026   24188 system_pods.go:126] duration metric: took 3.175021ms to wait for k8s-apps to be running ...
	I1008 22:01:16.017034   24188 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:01:16.017090   24188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:01:16.034456   24188 system_svc.go:56] duration metric: took 17.411511ms WaitForService to wait for kubelet
	I1008 22:01:16.034473   24188 kubeadm.go:586] duration metric: took 1.118114948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:01:16.034491   24188 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:01:16.038361   24188 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:01:16.038377   24188 node_conditions.go:123] node cpu capacity is 2
	I1008 22:01:16.038387   24188 node_conditions.go:105] duration metric: took 3.890432ms to run NodePressure ...
	I1008 22:01:16.038398   24188 start.go:241] waiting for startup goroutines ...
	I1008 22:01:16.038404   24188 start.go:246] waiting for cluster config update ...
	I1008 22:01:16.038421   24188 start.go:255] writing updated cluster config ...
	I1008 22:01:16.038722   24188 ssh_runner.go:195] Run: rm -f paused
	I1008 22:01:16.042541   24188 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:01:16.046092   24188 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wxkhn" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 22:01:18.051668   24188 pod_ready.go:104] pod "coredns-66bc5c9577-wxkhn" is not "Ready", error: <nil>
	W1008 22:01:20.051842   24188 pod_ready.go:104] pod "coredns-66bc5c9577-wxkhn" is not "Ready", error: <nil>
	W1008 22:01:22.052315   24188 pod_ready.go:104] pod "coredns-66bc5c9577-wxkhn" is not "Ready", error: <nil>
	I1008 22:01:22.552242   24188 pod_ready.go:94] pod "coredns-66bc5c9577-wxkhn" is "Ready"
	I1008 22:01:22.552257   24188 pod_ready.go:86] duration metric: took 6.506152453s for pod "coredns-66bc5c9577-wxkhn" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:22.554984   24188 pod_ready.go:83] waiting for pod "etcd-functional-101115" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:22.559647   24188 pod_ready.go:94] pod "etcd-functional-101115" is "Ready"
	I1008 22:01:22.559661   24188 pod_ready.go:86] duration metric: took 4.664479ms for pod "etcd-functional-101115" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:22.562063   24188 pod_ready.go:83] waiting for pod "kube-apiserver-functional-101115" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 22:01:24.568235   24188 pod_ready.go:104] pod "kube-apiserver-functional-101115" is not "Ready", error: <nil>
	W1008 22:01:27.067878   24188 pod_ready.go:104] pod "kube-apiserver-functional-101115" is not "Ready", error: <nil>
	I1008 22:01:28.567275   24188 pod_ready.go:94] pod "kube-apiserver-functional-101115" is "Ready"
	I1008 22:01:28.567290   24188 pod_ready.go:86] duration metric: took 6.005214344s for pod "kube-apiserver-functional-101115" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:28.569551   24188 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-101115" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:28.574071   24188 pod_ready.go:94] pod "kube-controller-manager-functional-101115" is "Ready"
	I1008 22:01:28.574085   24188 pod_ready.go:86] duration metric: took 4.520945ms for pod "kube-controller-manager-functional-101115" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:28.576354   24188 pod_ready.go:83] waiting for pod "kube-proxy-zl9vj" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:28.580934   24188 pod_ready.go:94] pod "kube-proxy-zl9vj" is "Ready"
	I1008 22:01:28.580949   24188 pod_ready.go:86] duration metric: took 4.583034ms for pod "kube-proxy-zl9vj" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:28.583235   24188 pod_ready.go:83] waiting for pod "kube-scheduler-functional-101115" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:28.765298   24188 pod_ready.go:94] pod "kube-scheduler-functional-101115" is "Ready"
	I1008 22:01:28.765313   24188 pod_ready.go:86] duration metric: took 182.065505ms for pod "kube-scheduler-functional-101115" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:01:28.765325   24188 pod_ready.go:40] duration metric: took 12.722761251s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:01:28.818634   24188 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:01:28.821866   24188 out.go:179] * Done! kubectl is now configured to use "functional-101115" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 22:02:05 functional-101115 crio[3522]: time="2025-10-08T22:02:05.580627565Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-tn5t4 Namespace:default ID:cd2bb5ba6c25cd6ebc6a6b2f15c4478d31a111485623bc11ae6e871936da8d5a UID:a4cf3e43-416b-4c92-846e-5a21d0d5df5c NetNS:/var/run/netns/62bd1d63-6a98-461b-838b-9b0749b2e2c6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40006e8b90}] Aliases:map[]}"
	Oct 08 22:02:05 functional-101115 crio[3522]: time="2025-10-08T22:02:05.580779509Z" level=info msg="Checking pod default_hello-node-75c85bcc94-tn5t4 for CNI network kindnet (type=ptp)"
	Oct 08 22:02:05 functional-101115 crio[3522]: time="2025-10-08T22:02:05.583804153Z" level=info msg="Ran pod sandbox cd2bb5ba6c25cd6ebc6a6b2f15c4478d31a111485623bc11ae6e871936da8d5a with infra container: default/hello-node-75c85bcc94-tn5t4/POD" id=fdd2e9e5-6316-4dea-97a9-828e4b88ef03 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 22:02:05 functional-101115 crio[3522]: time="2025-10-08T22:02:05.586381442Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=77ae0c97-dc59-4c00-a654-b17841afdcf8 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.619684304Z" level=info msg="Stopping pod sandbox: 1b4d4acfd68f35d8db3a9a6d80cc15c5d7f0fbd4ccb1377e7a8a3c6e0dd92841" id=df0e2b33-1d4c-4cbb-a8b2-19b4d28ae7e1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.619755164Z" level=info msg="Stopped pod sandbox (already stopped): 1b4d4acfd68f35d8db3a9a6d80cc15c5d7f0fbd4ccb1377e7a8a3c6e0dd92841" id=df0e2b33-1d4c-4cbb-a8b2-19b4d28ae7e1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.620171414Z" level=info msg="Removing pod sandbox: 1b4d4acfd68f35d8db3a9a6d80cc15c5d7f0fbd4ccb1377e7a8a3c6e0dd92841" id=b33c5b74-5b4a-44ab-81fd-55e161ac18b0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.624598083Z" level=info msg="Removed pod sandbox: 1b4d4acfd68f35d8db3a9a6d80cc15c5d7f0fbd4ccb1377e7a8a3c6e0dd92841" id=b33c5b74-5b4a-44ab-81fd-55e161ac18b0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.626543562Z" level=info msg="Stopping pod sandbox: 3efc36e70bcc7d93c287438640b6eaa3bceb53302c10f80a87361790412f75ef" id=87d83a02-1c73-4ca3-a7a2-ed77f370f326 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.626607185Z" level=info msg="Stopped pod sandbox (already stopped): 3efc36e70bcc7d93c287438640b6eaa3bceb53302c10f80a87361790412f75ef" id=87d83a02-1c73-4ca3-a7a2-ed77f370f326 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.630335976Z" level=info msg="Removing pod sandbox: 3efc36e70bcc7d93c287438640b6eaa3bceb53302c10f80a87361790412f75ef" id=ff5fc3d5-8eae-43c3-abd8-a5e420a73221 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.636237547Z" level=info msg="Removed pod sandbox: 3efc36e70bcc7d93c287438640b6eaa3bceb53302c10f80a87361790412f75ef" id=ff5fc3d5-8eae-43c3-abd8-a5e420a73221 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.639316329Z" level=info msg="Stopping pod sandbox: b15b0b85c4c738f0010360468a8fd41acca851f6d986aed022369e467cbe5974" id=5ae597b6-02fe-4879-8c4a-d008b5f36e39 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.639377581Z" level=info msg="Stopped pod sandbox (already stopped): b15b0b85c4c738f0010360468a8fd41acca851f6d986aed022369e467cbe5974" id=5ae597b6-02fe-4879-8c4a-d008b5f36e39 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.640166717Z" level=info msg="Removing pod sandbox: b15b0b85c4c738f0010360468a8fd41acca851f6d986aed022369e467cbe5974" id=e05f0f60-643d-41bf-9c79-93540e914972 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 08 22:02:08 functional-101115 crio[3522]: time="2025-10-08T22:02:08.648876257Z" level=info msg="Removed pod sandbox: b15b0b85c4c738f0010360468a8fd41acca851f6d986aed022369e467cbe5974" id=e05f0f60-643d-41bf-9c79-93540e914972 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 08 22:02:19 functional-101115 crio[3522]: time="2025-10-08T22:02:19.46998238Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9feac131-6158-4c97-b809-dc2606075389 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:02:33 functional-101115 crio[3522]: time="2025-10-08T22:02:33.469463252Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=108b0c28-cefa-4468-9b3c-419cf25e4753 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:02:42 functional-101115 crio[3522]: time="2025-10-08T22:02:42.47005858Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c6dff781-16b4-47f9-b890-4eb38919a8c6 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:03:22 functional-101115 crio[3522]: time="2025-10-08T22:03:22.469369541Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=068a5c92-57a2-45d8-a114-abddd2008863 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:03:31 functional-101115 crio[3522]: time="2025-10-08T22:03:31.469093974Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=605d991a-92eb-4641-8a5b-94899d789a9b name=/runtime.v1.ImageService/PullImage
	Oct 08 22:04:52 functional-101115 crio[3522]: time="2025-10-08T22:04:52.469974701Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7212b187-065b-4254-afb7-340d09b60202 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:05:04 functional-101115 crio[3522]: time="2025-10-08T22:05:04.469763572Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=90385010-4705-4832-8345-1f44697a17c6 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:07:45 functional-101115 crio[3522]: time="2025-10-08T22:07:45.469536362Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=84d8c1d9-abc1-4503-989f-598966bc3d05 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:07:46 functional-101115 crio[3522]: time="2025-10-08T22:07:46.46965051Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6e9793d5-2b31-4e74-8a5b-1b4e46034519 name=/runtime.v1.ImageService/PullImage
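Note: the repeated "Pulling image: kicbase/echo-server:latest" entries above never complete because the reference is an unqualified short name; the kubelet log later in this section reports the same failure as "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". A minimal sketch of a workaround, assuming docker.io/kicbase/echo-server is the intended source and run inside the node (for example via minikube -p functional-101115 ssh), is a short-name alias drop-in for CRI-O:

	# assumption: docker.io/kicbase/echo-server is the intended registry/image
	cat <<'EOF' | sudo tee /etc/containers/registries.conf.d/99-echo-server.conf
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	sudo systemctl restart crio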
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	aa0037a09142a       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   1f454c4178cbf       sp-pod                                      default
	4a79082ed48e7       docker.io/library/nginx@sha256:9388e9644d1118a705af691f800b926c4683665f1f748234e1289add5f5a95cd   10 minutes ago      Running             nginx                     0                   ea8bd31a3e74d       nginx-svc                                   default
	4c3bb39d7059a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   4c98e4b99bb3b       kindnet-vs4xl                               kube-system
	b40da241a5f13       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   4b9a305f146b1       coredns-66bc5c9577-wxkhn                    kube-system
	dd57dc43dac02       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   d50cd547ad162       kube-proxy-zl9vj                            kube-system
	1ae5a055a5077       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   5372954bf2bc6       storage-provisioner                         kube-system
	7cd3a5489ab74       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   64dae0809da70       kube-apiserver-functional-101115            kube-system
	75916942ce3b1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   9de06c5574197       kube-scheduler-functional-101115            kube-system
	e5db4114a8907       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   97be1c66233c1       kube-controller-manager-functional-101115   kube-system
	b39c5f8ee6ea2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   01c8ceee8bbbf       etcd-functional-101115                      kube-system
	e42c1c5566f54       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   5372954bf2bc6       storage-provisioner                         kube-system
	1a8769f892886       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   4c98e4b99bb3b       kindnet-vs4xl                               kube-system
	786cddf60498b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   d50cd547ad162       kube-proxy-zl9vj                            kube-system
	7e4825694c847       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   97be1c66233c1       kube-controller-manager-functional-101115   kube-system
	0fd813dcdfd34       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   4b9a305f146b1       coredns-66bc5c9577-wxkhn                    kube-system
	f8f2adf8f0e0a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   01c8ceee8bbbf       etcd-functional-101115                      kube-system
	2232c0782cc26       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   9de06c5574197       kube-scheduler-functional-101115            kube-system
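Note: the container status table above is what the CRI reports on the node; a quick way to reproduce it (sketch) is:

	minikube -p functional-101115 ssh -- sudo crictl ps -a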
	
	
	==> coredns [0fd813dcdfd345398c2dd607ca345fee7568485429128af36017a534ca75bb9e] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55634 - 31725 "HINFO IN 5853740268447589084.4726024950520438158. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.058988762s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
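Note: this first CoreDNS instance could not reach the apiserver at 10.96.0.1:443 while the control plane was restarting, then received SIGTERM. Once the cluster is back, an in-cluster lookup is a simple way to confirm DNS is serving again (sketch; the probe pod name and image are arbitrary choices, not part of the test):

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
	  nslookup kubernetes.default.svc.cluster.local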
	
	
	==> coredns [b40da241a5f13cb123af29af9caab3430e57209e40f0ff6c8f6d76bb4e4f198f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46996 - 407 "HINFO IN 2140058749956364170.8111923664599624571. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004309324s
	
	
	==> describe nodes <==
	Name:               functional-101115
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-101115
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=functional-101115
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T21_59_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 21:59:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-101115
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:11:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:10:42 +0000   Wed, 08 Oct 2025 21:59:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:10:42 +0000   Wed, 08 Oct 2025 21:59:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:10:42 +0000   Wed, 08 Oct 2025 21:59:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:10:42 +0000   Wed, 08 Oct 2025 22:00:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-101115
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 06026255b86048b6a6946511d35d3d3c
	  System UUID:                ba29d11f-c94a-450a-80fd-9c88cc2a1121
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-tn5t4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-65qcv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-wxkhn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-101115                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-vs4xl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-101115             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-101115    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zl9vj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-101115             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-101115 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-101115 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-101115 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-101115 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-101115 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-101115 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node functional-101115 event: Registered Node functional-101115 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-101115 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-101115 event: Registered Node functional-101115 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-101115 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-101115 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-101115 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-101115 event: Registered Node functional-101115 in Controller
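Note: in the "Allocated resources" block above, percentages are relative to the node's allocatable capacity (2 CPUs, 8022296Ki memory); 850m of requested CPU against 2000m allocatable is 42.5%, shown as 42%. To pull just that block out of the node description (sketch):

	kubectl describe node functional-101115 | sed -n '/Allocated resources/,/Events/p'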
	
	
	==> dmesg <==
	[Oct 8 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015330] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.500107] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036203] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743682] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.166411] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 8 21:52] hrtimer: interrupt took 47692610 ns
	[ +22.956892] overlayfs: idmapped layers are currently not supported
	[  +0.073462] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 8 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 8 21:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b39c5f8ee6ea25ab49e1d00346d55ca66785218e9c51d5b66cabbf42bb01fc7a] <==
	{"level":"warn","ts":"2025-10-08T22:01:10.991499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.011085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.034969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.050698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.070667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.082476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.150497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.190170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.204006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.227722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.238896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.261962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.285462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.299222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.319197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.330511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.350688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.363840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.407835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.436019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.471710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:01:11.581733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56866","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-08T22:11:10.172593Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1132}
	{"level":"info","ts":"2025-10-08T22:11:10.196736Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1132,"took":"23.848986ms","hash":3597376999,"current-db-size-bytes":3272704,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1429504,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-08T22:11:10.196789Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3597376999,"revision":1132,"compact-revision":-1}
	
	
	==> etcd [f8f2adf8f0e0a76b90ad046fd4a27eae8eeefb22cf8fdf2546b89d78cc1fe8e7] <==
	{"level":"warn","ts":"2025-10-08T22:00:27.040362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:00:27.047639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:00:27.067369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:00:27.092912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:00:27.120142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:00:27.134668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:00:27.185473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56792","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-08T22:00:51.772744Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-08T22:00:51.772819Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-101115","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-08T22:00:51.772963Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-08T22:00:51.778559Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-08T22:00:51.915187Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-08T22:00:51.915301Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-08T22:00:51.915353Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-08T22:00:51.915364Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-08T22:00:51.915326Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-08T22:00:51.915398Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-08T22:00:51.915421Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-08T22:00:51.915499Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-08T22:00:51.915539Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-08T22:00:51.915570Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-08T22:00:51.919346Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-08T22:00:51.919439Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-08T22:00:51.919515Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-08T22:00:51.919543Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-101115","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:11:51 up 54 min,  0 user,  load average: 0.51, 0.51, 0.59
	Linux functional-101115 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a8769f8928866d833b01aafcca4f2c98b6e8bfd2a52837549ea904513e5192d] <==
	I1008 22:00:23.655004       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:00:23.655320       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1008 22:00:23.655443       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:00:23.655456       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:00:23.655466       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:00:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:00:24.049742       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:00:24.049853       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:00:24.049890       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:00:24.050101       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1008 22:00:28.553739       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:00:28.553873       1 metrics.go:72] Registering metrics
	I1008 22:00:28.553978       1 controller.go:711] "Syncing nftables rules"
	I1008 22:00:33.935960       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:00:33.936014       1 main.go:301] handling current node
	I1008 22:00:43.935955       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:00:43.936012       1 main.go:301] handling current node
	
	
	==> kindnet [4c3bb39d7059a1a4a74131d9acc067763bd27bf99ffec951d832189cb7c7b4c9] <==
	I1008 22:09:43.212205       1 main.go:301] handling current node
	I1008 22:09:53.204983       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:09:53.205024       1 main.go:301] handling current node
	I1008 22:10:03.205773       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:10:03.205809       1 main.go:301] handling current node
	I1008 22:10:13.209764       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:10:13.209867       1 main.go:301] handling current node
	I1008 22:10:23.204634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:10:23.204674       1 main.go:301] handling current node
	I1008 22:10:33.204633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:10:33.204667       1 main.go:301] handling current node
	I1008 22:10:43.204486       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:10:43.204520       1 main.go:301] handling current node
	I1008 22:10:53.204650       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:10:53.204702       1 main.go:301] handling current node
	I1008 22:11:03.204477       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:11:03.204510       1 main.go:301] handling current node
	I1008 22:11:13.203712       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:11:13.203748       1 main.go:301] handling current node
	I1008 22:11:23.211386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:11:23.211498       1 main.go:301] handling current node
	I1008 22:11:33.205220       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:11:33.205277       1 main.go:301] handling current node
	I1008 22:11:43.211453       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1008 22:11:43.211489       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7cd3a5489ab741e37439365c5ee3366847c973806d8cffa86be09a5fa27648bc] <==
	I1008 22:01:12.467884       1 policy_source.go:240] refreshing policies
	I1008 22:01:12.477062       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:01:12.531848       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 22:01:12.531983       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 22:01:12.541413       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 22:01:12.541512       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 22:01:12.541657       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 22:01:12.542576       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1008 22:01:12.542823       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 22:01:12.543236       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1008 22:01:12.581874       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 22:01:12.641507       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 22:01:13.251549       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:01:14.618279       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1008 22:01:14.758665       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 22:01:14.835608       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 22:01:14.846212       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 22:01:15.896525       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 22:01:16.134415       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:01:16.184479       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 22:01:32.240020       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.152.21"}
	I1008 22:01:39.071146       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.186.180"}
	I1008 22:01:49.666547       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.112.111"}
	I1008 22:02:05.340757       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.152.141"}
	I1008 22:11:12.429828       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
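Note: the "allocated clusterIPs" lines record the Service IPs handed out for invalid-svc, nginx-svc, hello-node-connect and hello-node; they should match what kubectl reports for those Services (sketch):

	kubectl get svc -A -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,CLUSTER-IP:.spec.clusterIP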
	
	
	==> kube-controller-manager [7e4825694c84741fddbed0602f9a6d4b9e4acd50fe1ccbfc03c3092d2dffdb24] <==
	I1008 22:00:31.759684       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:00:31.759785       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:00:31.759819       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:00:31.759723       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1008 22:00:31.762508       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 22:00:31.764796       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 22:00:31.765289       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 22:00:31.765546       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 22:00:31.768792       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1008 22:00:31.768904       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:00:31.768956       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 22:00:31.775161       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 22:00:31.775268       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 22:00:31.775361       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-101115"
	I1008 22:00:31.775428       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1008 22:00:31.783811       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1008 22:00:31.787599       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 22:00:31.789765       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1008 22:00:31.791750       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 22:00:31.791787       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 22:00:31.792095       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 22:00:31.792124       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1008 22:00:31.792146       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 22:00:31.793334       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 22:00:31.794386       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-controller-manager [e5db4114a89071f96dff79047d40fccc4c3efb80bd180e197f6759549b1d2836] <==
	I1008 22:01:15.858517       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1008 22:01:15.861739       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 22:01:15.866930       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 22:01:15.868104       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 22:01:15.871718       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1008 22:01:15.877163       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:01:15.878331       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1008 22:01:15.878438       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:01:15.878446       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:01:15.878453       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:01:15.878512       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 22:01:15.879617       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 22:01:15.881924       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 22:01:15.881941       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 22:01:15.882093       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1008 22:01:15.883461       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1008 22:01:15.888158       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 22:01:15.890453       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1008 22:01:15.892080       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1008 22:01:15.892899       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 22:01:15.894407       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1008 22:01:15.895620       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1008 22:01:15.903578       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1008 22:01:15.917210       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1008 22:01:15.922531       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [786cddf60498b08d99303f6076c7112660eeb54f9536cc4aaf0b3e0076807766] <==
	I1008 22:00:26.695251       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:00:27.610673       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:00:28.513678       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:00:28.513720       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1008 22:00:28.513809       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:00:28.979507       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:00:28.979669       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:00:29.053811       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:00:29.054492       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:00:29.054552       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:00:29.055928       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:00:29.055999       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:00:29.056309       1 config.go:200] "Starting service config controller"
	I1008 22:00:29.056358       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:00:29.056690       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:00:29.056740       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:00:29.057717       1 config.go:309] "Starting node config controller"
	I1008 22:00:29.057767       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:00:29.057808       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 22:00:29.156353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 22:00:29.156440       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 22:00:29.156791       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [dd57dc43dac02ab363aa45267dd194cf6602c9fb518358be670a34d46421f7f2] <==
	I1008 22:01:12.931333       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:01:13.059463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:01:13.160095       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:01:13.160132       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1008 22:01:13.160220       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:01:13.180970       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:01:13.181032       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:01:13.185727       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:01:13.186034       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:01:13.186058       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:01:13.187264       1 config.go:200] "Starting service config controller"
	I1008 22:01:13.187349       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:01:13.187392       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:01:13.187420       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:01:13.187463       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:01:13.187489       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:01:13.190069       1 config.go:309] "Starting node config controller"
	I1008 22:01:13.191037       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:01:13.191102       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 22:01:13.288321       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 22:01:13.288361       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 22:01:13.288396       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
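Note: both kube-proxy runs come up with the iptables proxier, so each Service gets KUBE-SVC-* chains in the nat table, tagged with the service name in rule comments. A spot check on the node for the nginx-svc Service created during the test (sketch):

	minikube -p functional-101115 ssh -- sudo iptables-save -t nat | grep nginx-svc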
	
	
	==> kube-scheduler [2232c0782cc2670db1f177c47e934690ad2d63bb0a469f26a1d62b7a8e12bd68] <==
	I1008 22:00:27.332100       1 serving.go:386] Generated self-signed cert in-memory
	I1008 22:00:28.753923       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 22:00:28.757716       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:00:28.812836       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 22:00:28.813435       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:00:28.813754       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:00:28.813836       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:00:28.814150       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:00:28.813594       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 22:00:28.813051       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 22:00:28.816797       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 22:00:28.914347       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:00:28.914481       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:00:28.925596       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 22:00:51.774399       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1008 22:00:51.774512       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1008 22:00:51.774524       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1008 22:00:51.774536       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1008 22:00:51.774561       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:00:51.774580       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:00:51.774845       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1008 22:00:51.774921       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [75916942ce3b117d856a6121045233099efbde256bd89ea8bfb2464fb9c82d2f] <==
	I1008 22:01:10.113992       1 serving.go:386] Generated self-signed cert in-memory
	I1008 22:01:12.622107       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 22:01:12.625858       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:01:12.643814       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 22:01:12.643968       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 22:01:12.644029       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 22:01:12.644099       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 22:01:12.645338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:01:12.645419       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:01:12.645464       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:01:12.645505       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:01:12.747175       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:01:12.747327       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 22:01:12.747415       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 22:09:12 functional-101115 kubelet[3838]: E1008 22:09:12.469730    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:09:24 functional-101115 kubelet[3838]: E1008 22:09:24.469417    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:09:27 functional-101115 kubelet[3838]: E1008 22:09:27.469685    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:09:35 functional-101115 kubelet[3838]: E1008 22:09:35.468979    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:09:39 functional-101115 kubelet[3838]: E1008 22:09:39.468663    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:09:49 functional-101115 kubelet[3838]: E1008 22:09:49.468870    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:09:50 functional-101115 kubelet[3838]: E1008 22:09:50.469059    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:10:01 functional-101115 kubelet[3838]: E1008 22:10:01.469555    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:10:03 functional-101115 kubelet[3838]: E1008 22:10:03.469279    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:10:14 functional-101115 kubelet[3838]: E1008 22:10:14.469328    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:10:14 functional-101115 kubelet[3838]: E1008 22:10:14.470098    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:10:26 functional-101115 kubelet[3838]: E1008 22:10:26.469434    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:10:26 functional-101115 kubelet[3838]: E1008 22:10:26.469865    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:10:39 functional-101115 kubelet[3838]: E1008 22:10:39.469606    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:10:41 functional-101115 kubelet[3838]: E1008 22:10:41.469485    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:10:53 functional-101115 kubelet[3838]: E1008 22:10:53.469220    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:10:55 functional-101115 kubelet[3838]: E1008 22:10:55.468744    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:11:06 functional-101115 kubelet[3838]: E1008 22:11:06.470486    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:11:10 functional-101115 kubelet[3838]: E1008 22:11:10.469877    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:11:21 functional-101115 kubelet[3838]: E1008 22:11:21.469687    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:11:24 functional-101115 kubelet[3838]: E1008 22:11:24.469367    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:11:35 functional-101115 kubelet[3838]: E1008 22:11:35.469112    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:11:35 functional-101115 kubelet[3838]: E1008 22:11:35.469175    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	Oct 08 22:11:48 functional-101115 kubelet[3838]: E1008 22:11:48.470360    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-65qcv" podUID="3866f1e9-62ed-4d2d-a647-61dfc501f265"
	Oct 08 22:11:49 functional-101115 kubelet[3838]: E1008 22:11:49.469426    3838 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn5t4" podUID="a4cf3e43-416b-4c92-846e-5a21d0d5df5c"
	
	
	==> storage-provisioner [1ae5a055a5077e566642d918131a0c47d29db2925e55de62d15eecfe30e7b701] <==
	W1008 22:11:27.125978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:29.128931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:29.133313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:31.135960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:31.142954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:33.145274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:33.149731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:35.153070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:35.159898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:37.163186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:37.167533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:39.170520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:39.176957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:41.179581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:41.184059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:43.187600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:43.192220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:45.195695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:45.206275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:47.210550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:47.214895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:49.218159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:49.222390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:51.226064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:11:51.233890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e42c1c5566f542746f692fff5798308a903a10d1f7179f6f82f6b3b53b02c66e] <==
	I1008 22:00:24.782566       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 22:00:28.610432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 22:00:28.610587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 22:00:28.648775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:32.106728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:36.367549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:39.966277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:43.020664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:46.043495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:46.055178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:00:46.055358       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 22:00:46.055480       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"432c1376-b796-4625-8214-91d81bafdcc7", APIVersion:"v1", ResourceVersion:"561", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-101115_29f2badf-66b4-4f39-b62a-a2b6cf36bca4 became leader
	I1008 22:00:46.055550       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-101115_29f2badf-66b4-4f39-b62a-a2b6cf36bca4!
	W1008 22:00:46.058908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:46.068343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:00:46.156713       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-101115_29f2badf-66b4-4f39-b62a-a2b6cf36bca4!
	W1008 22:00:48.072413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:48.077434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:50.081539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:00:50.086671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101115 -n functional-101115
helpers_test.go:269: (dbg) Run:  kubectl --context functional-101115 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-tn5t4 hello-node-connect-7d85dfc575-65qcv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-101115 describe pod hello-node-75c85bcc94-tn5t4 hello-node-connect-7d85dfc575-65qcv
helpers_test.go:290: (dbg) kubectl --context functional-101115 describe pod hello-node-75c85bcc94-tn5t4 hello-node-connect-7d85dfc575-65qcv:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-tn5t4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-101115/192.168.49.2
	Start Time:       Wed, 08 Oct 2025 22:02:05 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g9bqj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-g9bqj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-tn5t4 to functional-101115
	  Normal   Pulling    6m49s (x5 over 9m48s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m49s (x5 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m49s (x5 over 9m48s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m43s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m43s (x21 over 9m48s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-65qcv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-101115/192.168.49.2
	Start Time:       Wed, 08 Oct 2025 22:01:49 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8vv4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x8vv4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-65qcv to functional-101115
	  Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m3s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.57s)
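The ErrImagePull events above trace back to CRI-O's short-name policy: with short-name mode set to enforcing, the bare reference kicbase/echo-server matches more than one configured search registry and the pull is rejected as ambiguous. A minimal sketch of a workaround, assuming the image is published on Docker Hub (docker.io is an assumption, not something this log confirms), is to sidestep short-name resolution with a fully qualified reference:

    # Hypothetical check: does the fully qualified reference pull cleanly on the node?
    out/minikube-linux-arm64 -p functional-101115 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest

    # Hypothetical fix: point the deployment at the qualified name instead of the short name
    kubectl --context functional-101115 set image deployment/hello-node-connect \
        echo-server=docker.io/kicbase/echo-server:latest

The same pattern would apply to the hello-node deployment used by the ServiceCmd tests below.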

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-101115 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-101115 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-tn5t4" [a4cf3e43-416b-4c92-846e-5a21d0d5df5c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1008 22:02:13.863959    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:04:29.998342    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:04:57.706361    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:09:29.998461    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101115 -n functional-101115
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-08 22:12:05.737847414 +0000 UTC m=+1305.289116410
functional_test.go:1460: (dbg) Run:  kubectl --context functional-101115 describe po hello-node-75c85bcc94-tn5t4 -n default
functional_test.go:1460: (dbg) kubectl --context functional-101115 describe po hello-node-75c85bcc94-tn5t4 -n default:
Name:             hello-node-75c85bcc94-tn5t4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-101115/192.168.49.2
Start Time:       Wed, 08 Oct 2025 22:02:05 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g9bqj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-g9bqj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-tn5t4 to functional-101115
Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-101115 logs hello-node-75c85bcc94-tn5t4 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-101115 logs hello-node-75c85bcc94-tn5t4 -n default: exit status 1 (118.50633ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-tn5t4" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-101115 logs hello-node-75c85bcc94-tn5t4 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.87s)
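This is the same short-name enforcement failure as TestFunctional/parallel/ServiceCmdConnect above: the hello-node pod never becomes ready because the unqualified kicbase/echo-server pull is rejected. One way to confirm the policy on the node, assuming the containers-image configuration lives in the conventional /etc/containers location (not verified in this log):

    # Show the short-name policy and the unqualified-search registries inside the node
    out/minikube-linux-arm64 -p functional-101115 ssh -- sudo grep -R short-name-mode /etc/containers/
    out/minikube-linux-arm64 -p functional-101115 ssh -- sudo grep -R unqualified-search-registries /etc/containers/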

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 service --namespace=default --https --url hello-node: exit status 115 (541.965244ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32275
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-101115 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
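The SVC_UNREACHABLE exit here, and in the Format and URL subtests below, is a knock-on effect of the image-pull failures: the hello-node service has no running backend pod. A quick way to confirm that the service simply has no ready endpoints:

    kubectl --context functional-101115 get pods -l app=hello-node -n default
    kubectl --context functional-101115 get endpoints hello-node -n default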

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 service hello-node --url --format={{.IP}}: exit status 115 (541.84874ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-101115 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 service hello-node --url: exit status 115 (509.268139ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32275
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-101115 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32275
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image load --daemon kicbase/echo-server:functional-101115 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 image load --daemon kicbase/echo-server:functional-101115 --alsologtostderr: (2.248093478s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-101115" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.53s)
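image load --daemon is supposed to copy a tag from the host Docker daemon into the cluster's container runtime, yet the follow-up image ls does not show it. A rough sketch for narrowing down where the image went missing, assuming the host-side tag kicbase/echo-server:functional-101115 still exists (the host-side state is not shown in this log):

    # Is the tag still present in the host Docker daemon?
    docker image inspect --format '{{.Id}}' kicbase/echo-server:functional-101115

    # What does CRI-O inside the node actually hold?
    out/minikube-linux-arm64 -p functional-101115 ssh -- sudo crictl images | grep echo-server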

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image load --daemon kicbase/echo-server:functional-101115 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 image load --daemon kicbase/echo-server:functional-101115 --alsologtostderr: (1.164549059s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-101115" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-101115
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image load --daemon kicbase/echo-server:functional-101115 --alsologtostderr
2025/10/08 22:12:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-101115" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image save kicbase/echo-server:functional-101115 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1008 22:12:20.919170   32388 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:12:20.919367   32388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:12:20.919380   32388 out.go:374] Setting ErrFile to fd 2...
	I1008 22:12:20.919385   32388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:12:20.919692   32388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:12:20.920433   32388 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:12:20.920581   32388 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:12:20.921103   32388 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
	I1008 22:12:20.953368   32388 ssh_runner.go:195] Run: systemctl --version
	I1008 22:12:20.953426   32388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
	I1008 22:12:20.977746   32388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
	I1008 22:12:21.100309   32388 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1008 22:12:21.100376   32388 cache_images.go:254] Failed to load cached images for "functional-101115": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1008 22:12:21.100395   32388 cache_images.go:266] failed pushing to: functional-101115

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)
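The stat error makes this a cascade from TestFunctional/parallel/ImageCommands/ImageSaveToFile above: the tarball was never written, so there is nothing to load. A hedged reproduction outside the harness is to rerun the save with verbose logging and confirm the file actually appears:

    out/minikube-linux-arm64 -p functional-101115 image save kicbase/echo-server:functional-101115 \
        /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
    ls -l /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar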

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-101115
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image save --daemon kicbase/echo-server:functional-101115 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-101115
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-101115: exit status 1 (17.073214ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-101115

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-101115

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)
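Here image save --daemon should land the image back in the local Docker daemon (the test looks for it under the localhost/ prefix), but docker image inspect finds nothing. A minimal re-check outside the harness, assuming the tag still exists inside the cluster:

    out/minikube-linux-arm64 -p functional-101115 image save --daemon kicbase/echo-server:functional-101115 --alsologtostderr
    # List whatever echo-server tags the daemon ended up with, regardless of prefix
    docker images --format '{{.Repository}}:{{.Tag}}' | grep echo-server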

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.26s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-881367 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-881367 --output=json --user=testUser: exit status 80 (2.258414892s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"74db72e2-10e8-40dc-b4d4-38b10398f38c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-881367 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"986d5ea7-211b-46dd-a98e-03792c5460b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-08T22:23:48Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"a97cbc94-d55c-4892-8674-38426f27a5dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-881367 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.26s)
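This GUEST_PAUSE error, and the GUEST_UNPAUSE error in the next test, comes from minikube shelling out to sudo runc list -f json while /run/runc does not exist, which usually means CRI-O on the node is using a different OCI runtime or state directory than the pause path assumes. A rough check, assuming the conventional /etc/crio configuration location (an assumption, not shown in this log):

    # Which low-level runtime does CRI-O default to on this node?
    out/minikube-linux-arm64 -p json-output-881367 ssh -- sudo grep -R default_runtime /etc/crio/
    # Which runtime state directories actually exist?
    out/minikube-linux-arm64 -p json-output-881367 ssh -- ls -d /run/runc /run/crun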

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.82s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-881367 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-881367 --output=json --user=testUser: exit status 80 (1.814478441s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d12d4ac6-45be-4917-86af-bffe49d9a3c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-881367 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"58e6339b-dd05-459b-98bc-f561fffe1296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-08T22:23:50Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"908b9e90-22ff-4d76-973f-effb5dc18381","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-881367 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.82s)

                                                
                                    
x
+
TestPause/serial/Pause (7.84s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-326566 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-326566 --alsologtostderr -v=5: exit status 80 (2.040565612s)

                                                
                                                
-- stdout --
	* Pausing node pause-326566 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:42:20.055012  143600 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:42:20.055716  143600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:42:20.055726  143600 out.go:374] Setting ErrFile to fd 2...
	I1008 22:42:20.055731  143600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:42:20.056233  143600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:42:20.056841  143600 out.go:368] Setting JSON to false
	I1008 22:42:20.056870  143600 mustload.go:65] Loading cluster: pause-326566
	I1008 22:42:20.057817  143600 config.go:182] Loaded profile config "pause-326566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:42:20.058669  143600 cli_runner.go:164] Run: docker container inspect pause-326566 --format={{.State.Status}}
	I1008 22:42:20.077928  143600 host.go:66] Checking if "pause-326566" exists ...
	I1008 22:42:20.078253  143600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:42:20.138489  143600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-08 22:42:20.129094107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:42:20.139151  143600 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-326566 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1008 22:42:20.142419  143600 out.go:179] * Pausing node pause-326566 ... 
	I1008 22:42:20.146214  143600 host.go:66] Checking if "pause-326566" exists ...
	I1008 22:42:20.146575  143600 ssh_runner.go:195] Run: systemctl --version
	I1008 22:42:20.146622  143600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-326566
	I1008 22:42:20.165405  143600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32976 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/pause-326566/id_rsa Username:docker}
	I1008 22:42:20.267956  143600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:42:20.279660  143600 pause.go:52] kubelet running: true
	I1008 22:42:20.279727  143600 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 22:42:20.476671  143600 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 22:42:20.476763  143600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 22:42:20.545998  143600 cri.go:89] found id: "305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90"
	I1008 22:42:20.546022  143600 cri.go:89] found id: "a638f3395b215269fab29db65cadc1f832167d5f462cd7bf74a958baec1ff1f0"
	I1008 22:42:20.546027  143600 cri.go:89] found id: "42ed69aa2376eefaa33d776f8616c4b8a26001a733e75649403b32a07a1ba335"
	I1008 22:42:20.546031  143600 cri.go:89] found id: "55079340c4801100ffcea067eaee3412c383d55dcc8eb14ea741569d0e165dba"
	I1008 22:42:20.546034  143600 cri.go:89] found id: "ac7fc70b72c25acddce469364720ff480418c83f06d50fae2989fdc64c174ae9"
	I1008 22:42:20.546069  143600 cri.go:89] found id: "9c97602a1b914b0bec562d1ab31e684e86b9cba84e0a222d12a95a6bf582b626"
	I1008 22:42:20.546077  143600 cri.go:89] found id: "5f550cc3256d69cf0a87aff890ec252d743b03a23efa555edab77b39192914f0"
	I1008 22:42:20.546081  143600 cri.go:89] found id: "85a40f569ce3eca8891a77785f3d9bfabe54a45c3f44e307f90d27d4713ffb05"
	I1008 22:42:20.546085  143600 cri.go:89] found id: "5d475c9e5e584ad9eb05a2319b77f94d21c2a22eaf77a8598a1f5dedf1846050"
	I1008 22:42:20.546097  143600 cri.go:89] found id: "ae50592c09b35c2007c94b7f04f51edf308e741400b22ae3b3ab8f45411d783c"
	I1008 22:42:20.546105  143600 cri.go:89] found id: "e6b64e927a5ef789afa398e08b5e4229bcbefde1093a6a6b5439195f7bfa1789"
	I1008 22:42:20.546108  143600 cri.go:89] found id: "fd4fdab202d98a69cef0468b988f6f07eafd55e1d522df910f09362c61f70214"
	I1008 22:42:20.546111  143600 cri.go:89] found id: "e957cf9693b4bae8678cbf4b0eb2f02a61ff250134c2dce2b23a883229c58f85"
	I1008 22:42:20.546114  143600 cri.go:89] found id: "874596efe54cd8d210100aedef1c06813d435d5cb3aa7beb1ddc5d46acc2129d"
	I1008 22:42:20.546118  143600 cri.go:89] found id: ""
	I1008 22:42:20.546179  143600 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 22:42:20.557014  143600 retry.go:31] will retry after 360.939634ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:42:20Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:42:20.918414  143600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:42:20.933002  143600 pause.go:52] kubelet running: false
	I1008 22:42:20.933085  143600 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 22:42:21.121585  143600 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 22:42:21.121754  143600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 22:42:21.215122  143600 cri.go:89] found id: "305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90"
	I1008 22:42:21.215143  143600 cri.go:89] found id: "a638f3395b215269fab29db65cadc1f832167d5f462cd7bf74a958baec1ff1f0"
	I1008 22:42:21.215148  143600 cri.go:89] found id: "42ed69aa2376eefaa33d776f8616c4b8a26001a733e75649403b32a07a1ba335"
	I1008 22:42:21.215152  143600 cri.go:89] found id: "55079340c4801100ffcea067eaee3412c383d55dcc8eb14ea741569d0e165dba"
	I1008 22:42:21.215155  143600 cri.go:89] found id: "ac7fc70b72c25acddce469364720ff480418c83f06d50fae2989fdc64c174ae9"
	I1008 22:42:21.215159  143600 cri.go:89] found id: "9c97602a1b914b0bec562d1ab31e684e86b9cba84e0a222d12a95a6bf582b626"
	I1008 22:42:21.215162  143600 cri.go:89] found id: "5f550cc3256d69cf0a87aff890ec252d743b03a23efa555edab77b39192914f0"
	I1008 22:42:21.215165  143600 cri.go:89] found id: "85a40f569ce3eca8891a77785f3d9bfabe54a45c3f44e307f90d27d4713ffb05"
	I1008 22:42:21.215179  143600 cri.go:89] found id: "5d475c9e5e584ad9eb05a2319b77f94d21c2a22eaf77a8598a1f5dedf1846050"
	I1008 22:42:21.215185  143600 cri.go:89] found id: "ae50592c09b35c2007c94b7f04f51edf308e741400b22ae3b3ab8f45411d783c"
	I1008 22:42:21.215188  143600 cri.go:89] found id: "e6b64e927a5ef789afa398e08b5e4229bcbefde1093a6a6b5439195f7bfa1789"
	I1008 22:42:21.215191  143600 cri.go:89] found id: "fd4fdab202d98a69cef0468b988f6f07eafd55e1d522df910f09362c61f70214"
	I1008 22:42:21.215194  143600 cri.go:89] found id: "e957cf9693b4bae8678cbf4b0eb2f02a61ff250134c2dce2b23a883229c58f85"
	I1008 22:42:21.215197  143600 cri.go:89] found id: "874596efe54cd8d210100aedef1c06813d435d5cb3aa7beb1ddc5d46acc2129d"
	I1008 22:42:21.215200  143600 cri.go:89] found id: ""
	I1008 22:42:21.215247  143600 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 22:42:21.232306  143600 retry.go:31] will retry after 484.029958ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:42:21Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:42:21.716581  143600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:42:21.731327  143600 pause.go:52] kubelet running: false
	I1008 22:42:21.731396  143600 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 22:42:21.909620  143600 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 22:42:21.909726  143600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 22:42:21.992474  143600 cri.go:89] found id: "305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90"
	I1008 22:42:21.992499  143600 cri.go:89] found id: "a638f3395b215269fab29db65cadc1f832167d5f462cd7bf74a958baec1ff1f0"
	I1008 22:42:21.992505  143600 cri.go:89] found id: "42ed69aa2376eefaa33d776f8616c4b8a26001a733e75649403b32a07a1ba335"
	I1008 22:42:21.992510  143600 cri.go:89] found id: "55079340c4801100ffcea067eaee3412c383d55dcc8eb14ea741569d0e165dba"
	I1008 22:42:21.992513  143600 cri.go:89] found id: "ac7fc70b72c25acddce469364720ff480418c83f06d50fae2989fdc64c174ae9"
	I1008 22:42:21.992519  143600 cri.go:89] found id: "9c97602a1b914b0bec562d1ab31e684e86b9cba84e0a222d12a95a6bf582b626"
	I1008 22:42:21.992523  143600 cri.go:89] found id: "5f550cc3256d69cf0a87aff890ec252d743b03a23efa555edab77b39192914f0"
	I1008 22:42:21.992526  143600 cri.go:89] found id: "85a40f569ce3eca8891a77785f3d9bfabe54a45c3f44e307f90d27d4713ffb05"
	I1008 22:42:21.992530  143600 cri.go:89] found id: "5d475c9e5e584ad9eb05a2319b77f94d21c2a22eaf77a8598a1f5dedf1846050"
	I1008 22:42:21.992536  143600 cri.go:89] found id: "ae50592c09b35c2007c94b7f04f51edf308e741400b22ae3b3ab8f45411d783c"
	I1008 22:42:21.992539  143600 cri.go:89] found id: "e6b64e927a5ef789afa398e08b5e4229bcbefde1093a6a6b5439195f7bfa1789"
	I1008 22:42:21.992542  143600 cri.go:89] found id: "fd4fdab202d98a69cef0468b988f6f07eafd55e1d522df910f09362c61f70214"
	I1008 22:42:21.992546  143600 cri.go:89] found id: "e957cf9693b4bae8678cbf4b0eb2f02a61ff250134c2dce2b23a883229c58f85"
	I1008 22:42:21.992551  143600 cri.go:89] found id: "874596efe54cd8d210100aedef1c06813d435d5cb3aa7beb1ddc5d46acc2129d"
	I1008 22:42:21.992559  143600 cri.go:89] found id: ""
	I1008 22:42:21.992618  143600 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 22:42:22.018251  143600 out.go:203] 
	W1008 22:42:22.021718  143600 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:42:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:42:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 22:42:22.021740  143600 out.go:285] * 
	* 
	W1008 22:42:22.028529  143600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 22:42:22.035573  143600 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-326566 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-326566
helpers_test.go:243: (dbg) docker inspect pause-326566:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae",
	        "Created": "2025-10-08T22:40:14.584242231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134212,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:40:14.656412575Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae/hosts",
	        "LogPath": "/var/lib/docker/containers/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae-json.log",
	        "Name": "/pause-326566",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-326566:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-326566",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae",
	                "LowerDir": "/var/lib/docker/overlay2/db416229ffa96f339efe4bf6bc116739631731f24f6685842fa3dd9b80ff1318-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db416229ffa96f339efe4bf6bc116739631731f24f6685842fa3dd9b80ff1318/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db416229ffa96f339efe4bf6bc116739631731f24f6685842fa3dd9b80ff1318/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db416229ffa96f339efe4bf6bc116739631731f24f6685842fa3dd9b80ff1318/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-326566",
	                "Source": "/var/lib/docker/volumes/pause-326566/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-326566",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-326566",
	                "name.minikube.sigs.k8s.io": "pause-326566",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f69dfb4141b0810a6a14ec13a6140a1c2e17f8d0cff986c48302eb4c171b708",
	            "SandboxKey": "/var/run/docker/netns/2f69dfb4141b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-326566": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:12:b9:64:f5:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8574667d98cc0428a7357ba6e497b26200e15c9ad59a615abdcb59562ccceee",
	                    "EndpointID": "e7c08d8e8e3c939418c0465e29ff66a87dd3f91aea92f8f0eb28abe3e1353b43",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-326566",
	                        "4b94fbcd2eb5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-326566 -n pause-326566
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-326566 -n pause-326566: exit status 2 (442.626473ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-326566 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-326566 logs -n 25: (1.719012373s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-117053                                                                                                                   │ test-preload-117053         │ jenkins │ v1.37.0 │ 08 Oct 25 22:37 UTC │ 08 Oct 25 22:37 UTC │
	│ start   │ -p test-preload-117053 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                        │ test-preload-117053         │ jenkins │ v1.37.0 │ 08 Oct 25 22:37 UTC │ 08 Oct 25 22:37 UTC │
	│ image   │ test-preload-117053 image list                                                                                                           │ test-preload-117053         │ jenkins │ v1.37.0 │ 08 Oct 25 22:37 UTC │ 08 Oct 25 22:37 UTC │
	│ delete  │ -p test-preload-117053                                                                                                                   │ test-preload-117053         │ jenkins │ v1.37.0 │ 08 Oct 25 22:37 UTC │ 08 Oct 25 22:38 UTC │
	│ start   │ -p scheduled-stop-748542 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │ 08 Oct 25 22:38 UTC │
	│ stop    │ -p scheduled-stop-748542 --schedule 5m                                                                                                   │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 5m                                                                                                   │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 5m                                                                                                   │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --cancel-scheduled                                                                                              │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │ 08 Oct 25 22:38 UTC │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │ 08 Oct 25 22:39 UTC │
	│ delete  │ -p scheduled-stop-748542                                                                                                                 │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │ 08 Oct 25 22:39 UTC │
	│ start   │ -p insufficient-storage-299212 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-299212 │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │                     │
	│ delete  │ -p insufficient-storage-299212                                                                                                           │ insufficient-storage-299212 │ jenkins │ v1.37.0 │ 08 Oct 25 22:40 UTC │ 08 Oct 25 22:40 UTC │
	│ start   │ -p pause-326566 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-326566                │ jenkins │ v1.37.0 │ 08 Oct 25 22:40 UTC │ 08 Oct 25 22:41 UTC │
	│ start   │ -p missing-upgrade-336831 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-336831      │ jenkins │ v1.32.0 │ 08 Oct 25 22:40 UTC │ 08 Oct 25 22:41 UTC │
	│ start   │ -p missing-upgrade-336831 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-336831      │ jenkins │ v1.37.0 │ 08 Oct 25 22:41 UTC │ 08 Oct 25 22:42 UTC │
	│ start   │ -p pause-326566 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-326566                │ jenkins │ v1.37.0 │ 08 Oct 25 22:41 UTC │ 08 Oct 25 22:42 UTC │
	│ delete  │ -p missing-upgrade-336831                                                                                                                │ missing-upgrade-336831      │ jenkins │ v1.37.0 │ 08 Oct 25 22:42 UTC │ 08 Oct 25 22:42 UTC │
	│ start   │ -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-445308   │ jenkins │ v1.37.0 │ 08 Oct 25 22:42 UTC │                     │
	│ pause   │ -p pause-326566 --alsologtostderr -v=5                                                                                                   │ pause-326566                │ jenkins │ v1.37.0 │ 08 Oct 25 22:42 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:42:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:42:09.388224  142502 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:42:09.388396  142502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:42:09.388406  142502 out.go:374] Setting ErrFile to fd 2...
	I1008 22:42:09.388412  142502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:42:09.388666  142502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:42:09.389085  142502 out.go:368] Setting JSON to false
	I1008 22:42:09.390029  142502 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5080,"bootTime":1759958250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:42:09.390098  142502 start.go:141] virtualization:  
	I1008 22:42:09.393386  142502 out.go:179] * [kubernetes-upgrade-445308] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:42:09.397360  142502 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:42:09.397525  142502 notify.go:220] Checking for updates...
	I1008 22:42:09.403539  142502 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:42:09.406539  142502 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:42:09.409510  142502 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:42:09.412426  142502 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:42:09.415290  142502 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:42:09.418706  142502 config.go:182] Loaded profile config "pause-326566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:42:09.418861  142502 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:42:09.454765  142502 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:42:09.454933  142502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:42:09.517888  142502 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-08 22:42:09.508657289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:42:09.518004  142502 docker.go:318] overlay module found
	I1008 22:42:09.523082  142502 out.go:179] * Using the docker driver based on user configuration
	I1008 22:42:09.526060  142502 start.go:305] selected driver: docker
	I1008 22:42:09.526079  142502 start.go:925] validating driver "docker" against <nil>
	I1008 22:42:09.526141  142502 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:42:09.526950  142502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:42:09.611553  142502 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-08 22:42:09.601812412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:42:09.611728  142502 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 22:42:09.611948  142502 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 22:42:09.615036  142502 out.go:179] * Using Docker driver with root privileges
	I1008 22:42:09.617918  142502 cni.go:84] Creating CNI manager for ""
	I1008 22:42:09.617989  142502 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:42:09.618004  142502 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 22:42:09.618085  142502 start.go:349] cluster config:
	{Name:kubernetes-upgrade-445308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-445308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:42:09.621133  142502 out.go:179] * Starting "kubernetes-upgrade-445308" primary control-plane node in "kubernetes-upgrade-445308" cluster
	I1008 22:42:09.623972  142502 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:42:09.626856  142502 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:42:09.629879  142502 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:42:09.629936  142502 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1008 22:42:09.629948  142502 cache.go:58] Caching tarball of preloaded images
	I1008 22:42:09.629970  142502 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:42:09.630032  142502 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:42:09.630043  142502 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1008 22:42:09.630159  142502 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/kubernetes-upgrade-445308/config.json ...
	I1008 22:42:09.630175  142502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/kubernetes-upgrade-445308/config.json: {Name:mk5e099737877ad6104a03617eca723fba82bb27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:42:09.650363  142502 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:42:09.650389  142502 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:42:09.650409  142502 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:42:09.650432  142502 start.go:360] acquireMachinesLock for kubernetes-upgrade-445308: {Name:mk89d35ceafcd0eb1be6da2953201407a9fee31f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:42:09.650546  142502 start.go:364] duration metric: took 92.743µs to acquireMachinesLock for "kubernetes-upgrade-445308"
	I1008 22:42:09.650611  142502 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-445308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-445308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:42:09.650676  142502 start.go:125] createHost starting for "" (driver="docker")
	W1008 22:42:06.008699  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	W1008 22:42:08.012225  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	I1008 22:42:09.654042  142502 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:42:09.654284  142502 start.go:159] libmachine.API.Create for "kubernetes-upgrade-445308" (driver="docker")
	I1008 22:42:09.654330  142502 client.go:168] LocalClient.Create starting
	I1008 22:42:09.654442  142502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:42:09.654479  142502 main.go:141] libmachine: Decoding PEM data...
	I1008 22:42:09.654503  142502 main.go:141] libmachine: Parsing certificate...
	I1008 22:42:09.654560  142502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:42:09.654584  142502 main.go:141] libmachine: Decoding PEM data...
	I1008 22:42:09.654598  142502 main.go:141] libmachine: Parsing certificate...
	I1008 22:42:09.654961  142502 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-445308 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:42:09.675787  142502 cli_runner.go:211] docker network inspect kubernetes-upgrade-445308 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:42:09.675883  142502 network_create.go:284] running [docker network inspect kubernetes-upgrade-445308] to gather additional debugging logs...
	I1008 22:42:09.675905  142502 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-445308
	W1008 22:42:09.693880  142502 cli_runner.go:211] docker network inspect kubernetes-upgrade-445308 returned with exit code 1
	I1008 22:42:09.693913  142502 network_create.go:287] error running [docker network inspect kubernetes-upgrade-445308]: docker network inspect kubernetes-upgrade-445308: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-445308 not found
	I1008 22:42:09.693927  142502 network_create.go:289] output of [docker network inspect kubernetes-upgrade-445308]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-445308 not found
	
	** /stderr **
	I1008 22:42:09.694034  142502 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:42:09.710829  142502 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:42:09.711159  142502 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:42:09.711441  142502 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:42:09.711754  142502 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a8574667d98c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:b4:86:97:e5:85} reservation:<nil>}
	I1008 22:42:09.712171  142502 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001aaeb70}
	I1008 22:42:09.712196  142502 network_create.go:124] attempt to create docker network kubernetes-upgrade-445308 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1008 22:42:09.712267  142502 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-445308 kubernetes-upgrade-445308
	I1008 22:42:09.780315  142502 network_create.go:108] docker network kubernetes-upgrade-445308 192.168.85.0/24 created
	I1008 22:42:09.780350  142502 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-445308" container
	I1008 22:42:09.780433  142502 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:42:09.796984  142502 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-445308 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-445308 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:42:09.815837  142502 oci.go:103] Successfully created a docker volume kubernetes-upgrade-445308
	I1008 22:42:09.815922  142502 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-445308-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-445308 --entrypoint /usr/bin/test -v kubernetes-upgrade-445308:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:42:10.410510  142502 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-445308
	I1008 22:42:10.410554  142502 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:42:10.410574  142502 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:42:10.410660  142502 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-445308:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	W1008 22:42:10.017334  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	W1008 22:42:12.020080  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	W1008 22:42:14.507448  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	I1008 22:42:16.297120  142502 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-445308:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.886421136s)
	I1008 22:42:16.297154  142502 kic.go:203] duration metric: took 5.886576527s to extract preloaded images to volume ...
	W1008 22:42:16.297298  142502 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:42:16.297412  142502 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:42:16.348617  142502 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-445308 --name kubernetes-upgrade-445308 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-445308 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-445308 --network kubernetes-upgrade-445308 --ip 192.168.85.2 --volume kubernetes-upgrade-445308:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:42:16.631077  142502 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-445308 --format={{.State.Running}}
	I1008 22:42:16.654221  142502 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-445308 --format={{.State.Status}}
	I1008 22:42:16.679079  142502 cli_runner.go:164] Run: docker exec kubernetes-upgrade-445308 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:42:16.732253  142502 oci.go:144] the created container "kubernetes-upgrade-445308" has a running status.
	I1008 22:42:16.732285  142502 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/kubernetes-upgrade-445308/id_rsa...
	I1008 22:42:17.626230  142502 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/kubernetes-upgrade-445308/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:42:17.647238  142502 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-445308 --format={{.State.Status}}
	I1008 22:42:17.666172  142502 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:42:17.666194  142502 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-445308 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:42:17.708058  142502 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-445308 --format={{.State.Status}}
	I1008 22:42:17.724886  142502 machine.go:93] provisionDockerMachine start ...
	I1008 22:42:17.724989  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:17.741378  142502 main.go:141] libmachine: Using SSH client type: native
	I1008 22:42:17.741772  142502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32991 <nil> <nil>}
	I1008 22:42:17.741787  142502 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:42:17.742461  142502 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1008 22:42:17.014560  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	I1008 22:42:18.508167  138960 pod_ready.go:94] pod "coredns-66bc5c9577-c6ps2" is "Ready"
	I1008 22:42:18.508239  138960 pod_ready.go:86] duration metric: took 19.506745783s for pod "coredns-66bc5c9577-c6ps2" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.514774  138960 pod_ready.go:83] waiting for pod "etcd-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.522176  138960 pod_ready.go:94] pod "etcd-pause-326566" is "Ready"
	I1008 22:42:18.522198  138960 pod_ready.go:86] duration metric: took 7.404349ms for pod "etcd-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.525479  138960 pod_ready.go:83] waiting for pod "kube-apiserver-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.532146  138960 pod_ready.go:94] pod "kube-apiserver-pause-326566" is "Ready"
	I1008 22:42:18.532171  138960 pod_ready.go:86] duration metric: took 6.629514ms for pod "kube-apiserver-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.535365  138960 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.705732  138960 pod_ready.go:94] pod "kube-controller-manager-pause-326566" is "Ready"
	I1008 22:42:18.705761  138960 pod_ready.go:86] duration metric: took 170.374243ms for pod "kube-controller-manager-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.906179  138960 pod_ready.go:83] waiting for pod "kube-proxy-vs6x9" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:19.305149  138960 pod_ready.go:94] pod "kube-proxy-vs6x9" is "Ready"
	I1008 22:42:19.305177  138960 pod_ready.go:86] duration metric: took 398.969336ms for pod "kube-proxy-vs6x9" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:19.505275  138960 pod_ready.go:83] waiting for pod "kube-scheduler-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:19.905586  138960 pod_ready.go:94] pod "kube-scheduler-pause-326566" is "Ready"
	I1008 22:42:19.905616  138960 pod_ready.go:86] duration metric: took 400.311734ms for pod "kube-scheduler-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:19.905651  138960 pod_ready.go:40] duration metric: took 20.908439429s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:42:19.960011  138960 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:42:19.963040  138960 out.go:179] * Done! kubectl is now configured to use "pause-326566" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 22:41:49 pause-326566 crio[2053]: time="2025-10-08T22:41:49.842067009Z" level=info msg="Starting container: 42ed69aa2376eefaa33d776f8616c4b8a26001a733e75649403b32a07a1ba335" id=026f9490-da76-491a-8f25-94952770d77c name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:41:49 pause-326566 crio[2053]: time="2025-10-08T22:41:49.844896875Z" level=info msg="Started container" PID=2148 containerID=55079340c4801100ffcea067eaee3412c383d55dcc8eb14ea741569d0e165dba description=kube-system/kube-apiserver-pause-326566/kube-apiserver id=53e6fefd-b273-43f8-89f0-9966db57ea1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=a087307333c11cd5e9bca9b37a49999b60d3e4397d7b989c11f7b207d79c7bfa
	Oct 08 22:41:49 pause-326566 crio[2053]: time="2025-10-08T22:41:49.845756462Z" level=info msg="Started container" PID=2159 containerID=a638f3395b215269fab29db65cadc1f832167d5f462cd7bf74a958baec1ff1f0 description=kube-system/kindnet-blfz9/kindnet-cni id=ce048d48-e66f-49e0-9681-3f301517b7d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02a3f53cde70909be6abfd2454cbfa546f5dcee6d0b46320ab71cc30573f0025
	Oct 08 22:41:49 pause-326566 crio[2053]: time="2025-10-08T22:41:49.851333049Z" level=info msg="Started container" PID=2167 containerID=42ed69aa2376eefaa33d776f8616c4b8a26001a733e75649403b32a07a1ba335 description=kube-system/kube-scheduler-pause-326566/kube-scheduler id=026f9490-da76-491a-8f25-94952770d77c name=/runtime.v1.RuntimeService/StartContainer sandboxID=537eae1c9877664f517df87e85bb1a338eabb6fc8db148cebc1b9a73595ab084
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.21315355Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=979c79a8-5035-407b-9330-cc02c7f14523 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.214447217Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=ba08337c-3c6f-4a98-9987-8c043fb8abbc name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.217903617Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-c6ps2/coredns" id=1091b0cb-7660-4027-aa90-924c47fd300d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.218257689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.230734972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.231683553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.328043305Z" level=info msg="Created container 305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90: kube-system/coredns-66bc5c9577-c6ps2/coredns" id=1091b0cb-7660-4027-aa90-924c47fd300d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.333616241Z" level=info msg="Starting container: 305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90" id=82038980-c403-428c-8852-527be30a5866 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.335500816Z" level=info msg="Started container" PID=2440 containerID=305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90 description=kube-system/coredns-66bc5c9577-c6ps2/coredns id=82038980-c403-428c-8852-527be30a5866 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8379f968ba0a0a6d89d850f8884ddbf1b4c41c155cc3e9e3b5c5ebca73dcdf85
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.341777356Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.355924221Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.356146747Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.357484255Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.374587986Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.374779455Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.374869975Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.394078476Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.394267992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.394354573Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.399345275Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.399552629Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	305f705616ea5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   28 seconds ago       Running             coredns                   1                   8379f968ba0a0       coredns-66bc5c9577-c6ps2               kube-system
	a638f3395b215       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   33 seconds ago       Running             kindnet-cni               1                   02a3f53cde709       kindnet-blfz9                          kube-system
	42ed69aa2376e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   33 seconds ago       Running             kube-scheduler            1                   537eae1c98776       kube-scheduler-pause-326566            kube-system
	55079340c4801       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   33 seconds ago       Running             kube-apiserver            1                   a087307333c11       kube-apiserver-pause-326566            kube-system
	ac7fc70b72c25       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   33 seconds ago       Running             etcd                      1                   3f32493492af3       etcd-pause-326566                      kube-system
	9c97602a1b914       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   33 seconds ago       Running             kube-proxy                1                   1028fa8618383       kube-proxy-vs6x9                       kube-system
	5f550cc3256d6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   33 seconds ago       Running             kube-controller-manager   1                   02a84dd8eb9da       kube-controller-manager-pause-326566   kube-system
	85a40f569ce3e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   46 seconds ago       Exited              coredns                   0                   8379f968ba0a0       coredns-66bc5c9577-c6ps2               kube-system
	5d475c9e5e584       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   1028fa8618383       kube-proxy-vs6x9                       kube-system
	ae50592c09b35       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   02a3f53cde709       kindnet-blfz9                          kube-system
	e6b64e927a5ef       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   537eae1c98776       kube-scheduler-pause-326566            kube-system
	fd4fdab202d98       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   3f32493492af3       etcd-pause-326566                      kube-system
	e957cf9693b4b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   02a84dd8eb9da       kube-controller-manager-pause-326566   kube-system
	874596efe54cd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   a087307333c11       kube-apiserver-pause-326566            kube-system
	
	
	==> coredns [305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38224 - 25536 "HINFO IN 3626398726629763703.4107288149810172931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016400289s
	
	
	==> coredns [85a40f569ce3eca8891a77785f3d9bfabe54a45c3f44e307f90d27d4713ffb05] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59816 - 24994 "HINFO IN 8869420070307608886.5237573492754804969. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044203978s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-326566
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-326566
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=pause-326566
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_40_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:40:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-326566
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:42:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:42:18 +0000   Wed, 08 Oct 2025 22:40:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:42:18 +0000   Wed, 08 Oct 2025 22:40:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:42:18 +0000   Wed, 08 Oct 2025 22:40:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:42:18 +0000   Wed, 08 Oct 2025 22:42:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-326566
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56d89e5d28c749a7bec21082e2dc2094
	  System UUID:                2cc337a4-fb8e-4954-9852-59a8a41e72ee
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-c6ps2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     88s
	  kube-system                 etcd-pause-326566                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         96s
	  kube-system                 kindnet-blfz9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-pause-326566             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-pause-326566    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-vs6x9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-326566             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 87s                  kube-proxy       
	  Normal   Starting                 25s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  104s (x8 over 105s)  kubelet          Node pause-326566 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    104s (x8 over 105s)  kubelet          Node pause-326566 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s (x8 over 105s)  kubelet          Node pause-326566 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 94s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  94s                  kubelet          Node pause-326566 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    94s                  kubelet          Node pause-326566 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     94s                  kubelet          Node pause-326566 status is now: NodeHasSufficientPID
	  Normal   Starting                 94s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           90s                  node-controller  Node pause-326566 event: Registered Node pause-326566 in Controller
	  Warning  ContainerGCFailed        34s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             33s                  kubelet          Node pause-326566 status is now: NodeNotReady
	  Normal   RegisteredNode           22s                  node-controller  Node pause-326566 event: Registered Node pause-326566 in Controller
	  Normal   NodeReady                5s (x2 over 47s)     kubelet          Node pause-326566 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:17] overlayfs: idmapped layers are currently not supported
	[  +3.473782] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:18] overlayfs: idmapped layers are currently not supported
	[ +40.002132] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:19] overlayfs: idmapped layers are currently not supported
	[  +3.771758] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:20] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:21] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:22] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:27] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ac7fc70b72c25acddce469364720ff480418c83f06d50fae2989fdc64c174ae9] <==
	{"level":"warn","ts":"2025-10-08T22:41:53.580002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.646361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.758279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.766552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.817817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.876037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.895431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.945210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.991771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.058473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.069476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.113974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.150423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.190189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.223993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.300526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.344584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.432785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.484413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.514693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.606620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.644043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.731690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.771133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.869174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54134","server-name":"","error":"EOF"}
	
	
	==> etcd [fd4fdab202d98a69cef0468b988f6f07eafd55e1d522df910f09362c61f70214] <==
	{"level":"warn","ts":"2025-10-08T22:40:44.745322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.759386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.781533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.806287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.845166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.848151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.951222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-08T22:41:42.160361Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-08T22:41:42.160419Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-326566","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-08T22:41:42.160541Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-08T22:41:44.879610Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-08T22:41:44.879691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-08T22:41:44.879714Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-08T22:41:44.879818Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-08T22:41:44.879838Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-08T22:41:44.880071Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-08T22:41:44.880107Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-08T22:41:44.880116Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-08T22:41:44.880166Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-08T22:41:44.880179Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-08T22:41:44.880186Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-08T22:41:44.883293Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-08T22:41:44.883371Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-08T22:41:44.883404Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-08T22:41:44.883412Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-326566","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 22:42:23 up  1:24,  0 user,  load average: 3.22, 2.03, 1.81
	Linux pause-326566 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a638f3395b215269fab29db65cadc1f832167d5f462cd7bf74a958baec1ff1f0] <==
	I1008 22:41:50.013618       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:41:50.023493       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1008 22:41:50.023629       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:41:50.023642       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:41:50.023653       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:41:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:41:50.341421       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:41:50.341514       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:41:50.341550       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:41:50.345410       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1008 22:41:57.646308       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:41:57.646386       1 metrics.go:72] Registering metrics
	I1008 22:41:57.646453       1 controller.go:711] "Syncing nftables rules"
	I1008 22:42:00.341278       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:42:00.341376       1 main.go:301] handling current node
	I1008 22:42:10.341855       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:42:10.341888       1 main.go:301] handling current node
	I1008 22:42:20.341217       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:42:20.341283       1 main.go:301] handling current node
	
	
	==> kindnet [ae50592c09b35c2007c94b7f04f51edf308e741400b22ae3b3ab8f45411d783c] <==
	I1008 22:40:55.721682       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:40:55.722105       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1008 22:40:55.722235       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:40:55.722246       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:40:55.722256       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:40:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:40:56.017121       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:40:56.025926       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:40:56.025984       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:40:56.028140       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 22:41:26.017413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 22:41:26.017413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 22:41:26.022087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1008 22:41:26.028804       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1008 22:41:27.228649       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:41:27.228682       1 metrics.go:72] Registering metrics
	I1008 22:41:27.228756       1 controller.go:711] "Syncing nftables rules"
	I1008 22:41:36.019235       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:41:36.019299       1 main.go:301] handling current node
	
	
	==> kube-apiserver [55079340c4801100ffcea067eaee3412c383d55dcc8eb14ea741569d0e165dba] <==
	I1008 22:41:57.471181       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1008 22:41:57.477960       1 policy_source.go:240] refreshing policies
	I1008 22:41:57.506379       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1008 22:41:57.509924       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1008 22:41:57.510049       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 22:41:57.512690       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1008 22:41:57.522839       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:41:57.530226       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:41:57.546110       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 22:41:57.546234       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 22:41:57.546260       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 22:41:57.546267       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 22:41:57.546378       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 22:41:57.602192       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 22:41:57.609767       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 22:41:57.624410       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 22:41:57.631493       1 cache.go:39] Caches are synced for autoregister controller
	E1008 22:41:57.656968       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 22:41:57.677279       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:41:59.685417       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 22:42:01.316957       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 22:42:01.347920       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 22:42:01.376277       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:42:01.517958       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1008 22:42:01.567352       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [874596efe54cd8d210100aedef1c06813d435d5cb3aa7beb1ddc5d46acc2129d] <==
	W1008 22:41:43.186488       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186604       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186718       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186800       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186841       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186886       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.187106       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.189545       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.190772       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.191045       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.191098       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.191251       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.193688       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.193771       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.193688       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.193815       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199282       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199426       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199593       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199778       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199829       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199795       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199918       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199958       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.202457       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5f550cc3256d69cf0a87aff890ec252d743b03a23efa555edab77b39192914f0] <==
	I1008 22:42:01.190956       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 22:42:01.201112       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1008 22:42:01.208331       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 22:42:01.210399       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 22:42:01.210977       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 22:42:01.212493       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 22:42:01.217706       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 22:42:01.217878       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:42:01.218582       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 22:42:01.225850       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 22:42:01.226095       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 22:42:01.226368       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 22:42:01.226481       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-326566"
	I1008 22:42:01.226551       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 22:42:01.226608       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1008 22:42:01.229809       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1008 22:42:01.246820       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 22:42:01.262903       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1008 22:42:01.262994       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 22:42:01.275106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:42:01.275338       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:42:01.275385       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:42:01.275234       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:42:01.275307       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:42:21.229512       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [e957cf9693b4bae8678cbf4b0eb2f02a61ff250134c2dce2b23a883229c58f85] <==
	I1008 22:40:53.891640       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 22:40:53.892574       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 22:40:53.892886       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 22:40:53.909886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:40:53.910090       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 22:40:53.911027       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 22:40:53.911239       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 22:40:53.932693       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:40:53.932862       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:40:53.932903       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:40:53.932830       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1008 22:40:53.932842       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1008 22:40:53.932794       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 22:40:53.932815       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 22:40:53.933227       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 22:40:53.937743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:40:53.937852       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 22:40:53.937927       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 22:40:53.943559       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 22:40:53.943648       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1008 22:40:53.963035       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 22:40:53.982724       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 22:40:53.982845       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1008 22:40:53.989715       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:41:38.897945       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5d475c9e5e584ad9eb05a2319b77f94d21c2a22eaf77a8598a1f5dedf1846050] <==
	I1008 22:40:55.955152       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:40:56.095445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:40:56.220727       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:40:56.220831       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1008 22:40:56.220944       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:40:56.245988       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:40:56.246110       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:40:56.251200       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:40:56.252131       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:40:56.252209       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:40:56.257133       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:40:56.257211       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:40:56.257705       1 config.go:200] "Starting service config controller"
	I1008 22:40:56.257765       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:40:56.258166       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:40:56.262168       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:40:56.259569       1 config.go:309] "Starting node config controller"
	I1008 22:40:56.262313       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:40:56.263840       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 22:40:56.357510       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 22:40:56.358725       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 22:40:56.369777       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [9c97602a1b914b0bec562d1ab31e684e86b9cba84e0a222d12a95a6bf582b626] <==
	I1008 22:41:53.837389       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:41:55.087157       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:41:57.625968       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:41:57.625996       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1008 22:41:57.626073       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:41:57.993330       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:41:57.993451       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:41:58.085232       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:41:58.085655       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:41:58.085981       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:41:58.087984       1 config.go:200] "Starting service config controller"
	I1008 22:41:58.088055       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:41:58.088103       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:41:58.088130       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:41:58.088173       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:41:58.088200       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:41:58.132749       1 config.go:309] "Starting node config controller"
	I1008 22:41:58.141965       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:41:58.191574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 22:41:58.191919       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 22:41:58.191948       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 22:41:58.249046       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [42ed69aa2376eefaa33d776f8616c4b8a26001a733e75649403b32a07a1ba335] <==
	I1008 22:41:57.329945       1 serving.go:386] Generated self-signed cert in-memory
	I1008 22:42:00.939948       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 22:42:00.939987       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:42:00.947802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 22:42:00.947997       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 22:42:00.948051       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 22:42:00.948101       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 22:42:00.963404       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:42:00.963571       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:42:00.963624       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:42:00.963656       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:42:01.056733       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 22:42:01.064624       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:42:01.065782       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [e6b64e927a5ef789afa398e08b5e4229bcbefde1093a6a6b5439195f7bfa1789] <==
	E1008 22:40:47.036465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 22:40:47.036534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 22:40:47.036573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 22:40:47.036666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 22:40:47.036718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 22:40:47.036771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 22:40:47.036834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 22:40:47.036880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 22:40:47.036930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 22:40:47.036973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 22:40:47.037022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 22:40:47.039705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 22:40:47.039766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 22:40:47.039814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1008 22:40:47.039888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 22:40:47.039922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 22:40:47.048164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 22:40:48.075223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1008 22:40:49.880993       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:41:42.184988       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1008 22:41:42.185025       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1008 22:41:42.185060       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1008 22:41:42.185113       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:41:42.222298       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1008 22:41:42.222426       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.490833    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="331b114b916326e1fcaba026d0192f8e" pod="kube-system/kube-controller-manager-pause-326566"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.499245    1300 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-c6ps2" podUID="08d0f5d9-3b28-4b70-a497-45c06e51b990"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.500917    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="331b114b916326e1fcaba026d0192f8e" pod="kube-system/kube-controller-manager-pause-326566"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.502147    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-blfz9\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e5a1288e-4110-49f9-85bf-0f418d80b6b2" pod="kube-system/kindnet-blfz9"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.502872    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vs6x9\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1acc7ed2-b8d7-43e3-bbad-880f4ad69813" pod="kube-system/kube-proxy-vs6x9"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.503484    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-c6ps2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="08d0f5d9-3b28-4b70-a497-45c06e51b990" pod="kube-system/coredns-66bc5c9577-c6ps2"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.504042    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="49fd2db924ccd3cab82fddf4c055bc87" pod="kube-system/kube-scheduler-pause-326566"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.504613    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9d2e64967bfbc02363d2b3b85b86949a" pod="kube-system/etcd-pause-326566"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.505321    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="fb81b5471683d9ef68fe924f3df9ead2" pod="kube-system/kube-apiserver-pause-326566"
	Oct 08 22:41:50 pause-326566 kubelet[1300]: I1008 22:41:50.670266    1300 setters.go:543] "Node became not ready" node="pause-326566" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T22:41:50Z","lastTransitionTime":"2025-10-08T22:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Oct 08 22:41:51 pause-326566 kubelet[1300]: E1008 22:41:51.217265    1300 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-c6ps2" podUID="08d0f5d9-3b28-4b70-a497-45c06e51b990"
	Oct 08 22:41:53 pause-326566 kubelet[1300]: E1008 22:41:53.209943    1300 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-c6ps2" podUID="08d0f5d9-3b28-4b70-a497-45c06e51b990"
	Oct 08 22:41:55 pause-326566 kubelet[1300]: I1008 22:41:55.210623    1300 scope.go:117] "RemoveContainer" containerID="85a40f569ce3eca8891a77785f3d9bfabe54a45c3f44e307f90d27d4713ffb05"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.153413    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-326566\" is forbidden: User \"system:node:pause-326566\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" podUID="49fd2db924ccd3cab82fddf4c055bc87" pod="kube-system/kube-scheduler-pause-326566"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.153743    1300 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-326566\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.153795    1300 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-326566\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.153821    1300 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-326566\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.288025    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-326566\" is forbidden: User \"system:node:pause-326566\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" podUID="9d2e64967bfbc02363d2b3b85b86949a" pod="kube-system/etcd-pause-326566"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.464621    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-326566\" is forbidden: User \"system:node:pause-326566\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" podUID="fb81b5471683d9ef68fe924f3df9ead2" pod="kube-system/kube-apiserver-pause-326566"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.495297    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-326566\" is forbidden: User \"system:node:pause-326566\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" podUID="331b114b916326e1fcaba026d0192f8e" pod="kube-system/kube-controller-manager-pause-326566"
	Oct 08 22:41:59 pause-326566 kubelet[1300]: W1008 22:41:59.528969    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 08 22:42:09 pause-326566 kubelet[1300]: W1008 22:42:09.551160    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 08 22:42:20 pause-326566 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 22:42:20 pause-326566 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 22:42:20 pause-326566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-326566 -n pause-326566
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-326566 -n pause-326566: exit status 2 (503.032475ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-326566 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-326566
helpers_test.go:243: (dbg) docker inspect pause-326566:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae",
	        "Created": "2025-10-08T22:40:14.584242231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134212,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:40:14.656412575Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae/hosts",
	        "LogPath": "/var/lib/docker/containers/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae/4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae-json.log",
	        "Name": "/pause-326566",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-326566:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-326566",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b94fbcd2eb59a4a016d718dab48330524fe8ddba9250450b5ba2d433f94d5ae",
	                "LowerDir": "/var/lib/docker/overlay2/db416229ffa96f339efe4bf6bc116739631731f24f6685842fa3dd9b80ff1318-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/db416229ffa96f339efe4bf6bc116739631731f24f6685842fa3dd9b80ff1318/merged",
	                "UpperDir": "/var/lib/docker/overlay2/db416229ffa96f339efe4bf6bc116739631731f24f6685842fa3dd9b80ff1318/diff",
	                "WorkDir": "/var/lib/docker/overlay2/db416229ffa96f339efe4bf6bc116739631731f24f6685842fa3dd9b80ff1318/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-326566",
	                "Source": "/var/lib/docker/volumes/pause-326566/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-326566",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-326566",
	                "name.minikube.sigs.k8s.io": "pause-326566",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f69dfb4141b0810a6a14ec13a6140a1c2e17f8d0cff986c48302eb4c171b708",
	            "SandboxKey": "/var/run/docker/netns/2f69dfb4141b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-326566": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:12:b9:64:f5:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8574667d98cc0428a7357ba6e497b26200e15c9ad59a615abdcb59562ccceee",
	                    "EndpointID": "e7c08d8e8e3c939418c0465e29ff66a87dd3f91aea92f8f0eb28abe3e1353b43",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-326566",
	                        "4b94fbcd2eb5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-326566 -n pause-326566
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-326566 -n pause-326566: exit status 2 (453.133628ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-326566 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-326566 logs -n 25: (1.801090843s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-117053                                                                                                                   │ test-preload-117053         │ jenkins │ v1.37.0 │ 08 Oct 25 22:37 UTC │ 08 Oct 25 22:37 UTC │
	│ start   │ -p test-preload-117053 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                        │ test-preload-117053         │ jenkins │ v1.37.0 │ 08 Oct 25 22:37 UTC │ 08 Oct 25 22:37 UTC │
	│ image   │ test-preload-117053 image list                                                                                                           │ test-preload-117053         │ jenkins │ v1.37.0 │ 08 Oct 25 22:37 UTC │ 08 Oct 25 22:37 UTC │
	│ delete  │ -p test-preload-117053                                                                                                                   │ test-preload-117053         │ jenkins │ v1.37.0 │ 08 Oct 25 22:37 UTC │ 08 Oct 25 22:38 UTC │
	│ start   │ -p scheduled-stop-748542 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │ 08 Oct 25 22:38 UTC │
	│ stop    │ -p scheduled-stop-748542 --schedule 5m                                                                                                   │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 5m                                                                                                   │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 5m                                                                                                   │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --cancel-scheduled                                                                                              │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:38 UTC │ 08 Oct 25 22:38 UTC │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │                     │
	│ stop    │ -p scheduled-stop-748542 --schedule 15s                                                                                                  │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │ 08 Oct 25 22:39 UTC │
	│ delete  │ -p scheduled-stop-748542                                                                                                                 │ scheduled-stop-748542       │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │ 08 Oct 25 22:39 UTC │
	│ start   │ -p insufficient-storage-299212 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-299212 │ jenkins │ v1.37.0 │ 08 Oct 25 22:39 UTC │                     │
	│ delete  │ -p insufficient-storage-299212                                                                                                           │ insufficient-storage-299212 │ jenkins │ v1.37.0 │ 08 Oct 25 22:40 UTC │ 08 Oct 25 22:40 UTC │
	│ start   │ -p pause-326566 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-326566                │ jenkins │ v1.37.0 │ 08 Oct 25 22:40 UTC │ 08 Oct 25 22:41 UTC │
	│ start   │ -p missing-upgrade-336831 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-336831      │ jenkins │ v1.32.0 │ 08 Oct 25 22:40 UTC │ 08 Oct 25 22:41 UTC │
	│ start   │ -p missing-upgrade-336831 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-336831      │ jenkins │ v1.37.0 │ 08 Oct 25 22:41 UTC │ 08 Oct 25 22:42 UTC │
	│ start   │ -p pause-326566 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-326566                │ jenkins │ v1.37.0 │ 08 Oct 25 22:41 UTC │ 08 Oct 25 22:42 UTC │
	│ delete  │ -p missing-upgrade-336831                                                                                                                │ missing-upgrade-336831      │ jenkins │ v1.37.0 │ 08 Oct 25 22:42 UTC │ 08 Oct 25 22:42 UTC │
	│ start   │ -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-445308   │ jenkins │ v1.37.0 │ 08 Oct 25 22:42 UTC │                     │
	│ pause   │ -p pause-326566 --alsologtostderr -v=5                                                                                                   │ pause-326566                │ jenkins │ v1.37.0 │ 08 Oct 25 22:42 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:42:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:42:09.388224  142502 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:42:09.388396  142502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:42:09.388406  142502 out.go:374] Setting ErrFile to fd 2...
	I1008 22:42:09.388412  142502 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:42:09.388666  142502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:42:09.389085  142502 out.go:368] Setting JSON to false
	I1008 22:42:09.390029  142502 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5080,"bootTime":1759958250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:42:09.390098  142502 start.go:141] virtualization:  
	I1008 22:42:09.393386  142502 out.go:179] * [kubernetes-upgrade-445308] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:42:09.397360  142502 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:42:09.397525  142502 notify.go:220] Checking for updates...
	I1008 22:42:09.403539  142502 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:42:09.406539  142502 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:42:09.409510  142502 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:42:09.412426  142502 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:42:09.415290  142502 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:42:09.418706  142502 config.go:182] Loaded profile config "pause-326566": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:42:09.418861  142502 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:42:09.454765  142502 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:42:09.454933  142502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:42:09.517888  142502 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-08 22:42:09.508657289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:42:09.518004  142502 docker.go:318] overlay module found
	I1008 22:42:09.523082  142502 out.go:179] * Using the docker driver based on user configuration
	I1008 22:42:09.526060  142502 start.go:305] selected driver: docker
	I1008 22:42:09.526079  142502 start.go:925] validating driver "docker" against <nil>
	I1008 22:42:09.526141  142502 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:42:09.526950  142502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:42:09.611553  142502 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-08 22:42:09.601812412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:42:09.611728  142502 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 22:42:09.611948  142502 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 22:42:09.615036  142502 out.go:179] * Using Docker driver with root privileges
	I1008 22:42:09.617918  142502 cni.go:84] Creating CNI manager for ""
	I1008 22:42:09.617989  142502 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:42:09.618004  142502 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 22:42:09.618085  142502 start.go:349] cluster config:
	{Name:kubernetes-upgrade-445308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-445308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:42:09.621133  142502 out.go:179] * Starting "kubernetes-upgrade-445308" primary control-plane node in "kubernetes-upgrade-445308" cluster
	I1008 22:42:09.623972  142502 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:42:09.626856  142502 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:42:09.629879  142502 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:42:09.629936  142502 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1008 22:42:09.629948  142502 cache.go:58] Caching tarball of preloaded images
	I1008 22:42:09.629970  142502 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:42:09.630032  142502 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:42:09.630043  142502 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1008 22:42:09.630159  142502 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/kubernetes-upgrade-445308/config.json ...
	I1008 22:42:09.630175  142502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/kubernetes-upgrade-445308/config.json: {Name:mk5e099737877ad6104a03617eca723fba82bb27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:42:09.650363  142502 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:42:09.650389  142502 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:42:09.650409  142502 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:42:09.650432  142502 start.go:360] acquireMachinesLock for kubernetes-upgrade-445308: {Name:mk89d35ceafcd0eb1be6da2953201407a9fee31f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:42:09.650546  142502 start.go:364] duration metric: took 92.743µs to acquireMachinesLock for "kubernetes-upgrade-445308"
	I1008 22:42:09.650611  142502 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-445308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-445308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:42:09.650676  142502 start.go:125] createHost starting for "" (driver="docker")
	W1008 22:42:06.008699  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	W1008 22:42:08.012225  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	I1008 22:42:09.654042  142502 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:42:09.654284  142502 start.go:159] libmachine.API.Create for "kubernetes-upgrade-445308" (driver="docker")
	I1008 22:42:09.654330  142502 client.go:168] LocalClient.Create starting
	I1008 22:42:09.654442  142502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:42:09.654479  142502 main.go:141] libmachine: Decoding PEM data...
	I1008 22:42:09.654503  142502 main.go:141] libmachine: Parsing certificate...
	I1008 22:42:09.654560  142502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:42:09.654584  142502 main.go:141] libmachine: Decoding PEM data...
	I1008 22:42:09.654598  142502 main.go:141] libmachine: Parsing certificate...
	I1008 22:42:09.654961  142502 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-445308 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:42:09.675787  142502 cli_runner.go:211] docker network inspect kubernetes-upgrade-445308 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:42:09.675883  142502 network_create.go:284] running [docker network inspect kubernetes-upgrade-445308] to gather additional debugging logs...
	I1008 22:42:09.675905  142502 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-445308
	W1008 22:42:09.693880  142502 cli_runner.go:211] docker network inspect kubernetes-upgrade-445308 returned with exit code 1
	I1008 22:42:09.693913  142502 network_create.go:287] error running [docker network inspect kubernetes-upgrade-445308]: docker network inspect kubernetes-upgrade-445308: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-445308 not found
	I1008 22:42:09.693927  142502 network_create.go:289] output of [docker network inspect kubernetes-upgrade-445308]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-445308 not found
	
	** /stderr **
	I1008 22:42:09.694034  142502 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:42:09.710829  142502 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:42:09.711159  142502 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:42:09.711441  142502 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:42:09.711754  142502 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a8574667d98c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c2:b4:86:97:e5:85} reservation:<nil>}
	I1008 22:42:09.712171  142502 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001aaeb70}
	I1008 22:42:09.712196  142502 network_create.go:124] attempt to create docker network kubernetes-upgrade-445308 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1008 22:42:09.712267  142502 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-445308 kubernetes-upgrade-445308
	I1008 22:42:09.780315  142502 network_create.go:108] docker network kubernetes-upgrade-445308 192.168.85.0/24 created
	I1008 22:42:09.780350  142502 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-445308" container
	I1008 22:42:09.780433  142502 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:42:09.796984  142502 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-445308 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-445308 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:42:09.815837  142502 oci.go:103] Successfully created a docker volume kubernetes-upgrade-445308
	I1008 22:42:09.815922  142502 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-445308-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-445308 --entrypoint /usr/bin/test -v kubernetes-upgrade-445308:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:42:10.410510  142502 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-445308
	I1008 22:42:10.410554  142502 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:42:10.410574  142502 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:42:10.410660  142502 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-445308:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	W1008 22:42:10.017334  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	W1008 22:42:12.020080  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	W1008 22:42:14.507448  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	I1008 22:42:16.297120  142502 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-445308:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (5.886421136s)
	I1008 22:42:16.297154  142502 kic.go:203] duration metric: took 5.886576527s to extract preloaded images to volume ...
	W1008 22:42:16.297298  142502 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:42:16.297412  142502 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:42:16.348617  142502 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-445308 --name kubernetes-upgrade-445308 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-445308 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-445308 --network kubernetes-upgrade-445308 --ip 192.168.85.2 --volume kubernetes-upgrade-445308:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:42:16.631077  142502 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-445308 --format={{.State.Running}}
	I1008 22:42:16.654221  142502 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-445308 --format={{.State.Status}}
	I1008 22:42:16.679079  142502 cli_runner.go:164] Run: docker exec kubernetes-upgrade-445308 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:42:16.732253  142502 oci.go:144] the created container "kubernetes-upgrade-445308" has a running status.
	I1008 22:42:16.732285  142502 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/kubernetes-upgrade-445308/id_rsa...
	I1008 22:42:17.626230  142502 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/kubernetes-upgrade-445308/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:42:17.647238  142502 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-445308 --format={{.State.Status}}
	I1008 22:42:17.666172  142502 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:42:17.666194  142502 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-445308 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:42:17.708058  142502 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-445308 --format={{.State.Status}}
	I1008 22:42:17.724886  142502 machine.go:93] provisionDockerMachine start ...
	I1008 22:42:17.724989  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:17.741378  142502 main.go:141] libmachine: Using SSH client type: native
	I1008 22:42:17.741772  142502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32991 <nil> <nil>}
	I1008 22:42:17.741787  142502 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:42:17.742461  142502 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1008 22:42:17.014560  138960 pod_ready.go:104] pod "coredns-66bc5c9577-c6ps2" is not "Ready", error: node "pause-326566" hosting pod "coredns-66bc5c9577-c6ps2" is not "Ready" (will retry)
	I1008 22:42:18.508167  138960 pod_ready.go:94] pod "coredns-66bc5c9577-c6ps2" is "Ready"
	I1008 22:42:18.508239  138960 pod_ready.go:86] duration metric: took 19.506745783s for pod "coredns-66bc5c9577-c6ps2" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.514774  138960 pod_ready.go:83] waiting for pod "etcd-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.522176  138960 pod_ready.go:94] pod "etcd-pause-326566" is "Ready"
	I1008 22:42:18.522198  138960 pod_ready.go:86] duration metric: took 7.404349ms for pod "etcd-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.525479  138960 pod_ready.go:83] waiting for pod "kube-apiserver-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.532146  138960 pod_ready.go:94] pod "kube-apiserver-pause-326566" is "Ready"
	I1008 22:42:18.532171  138960 pod_ready.go:86] duration metric: took 6.629514ms for pod "kube-apiserver-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.535365  138960 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.705732  138960 pod_ready.go:94] pod "kube-controller-manager-pause-326566" is "Ready"
	I1008 22:42:18.705761  138960 pod_ready.go:86] duration metric: took 170.374243ms for pod "kube-controller-manager-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:18.906179  138960 pod_ready.go:83] waiting for pod "kube-proxy-vs6x9" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:19.305149  138960 pod_ready.go:94] pod "kube-proxy-vs6x9" is "Ready"
	I1008 22:42:19.305177  138960 pod_ready.go:86] duration metric: took 398.969336ms for pod "kube-proxy-vs6x9" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:19.505275  138960 pod_ready.go:83] waiting for pod "kube-scheduler-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:19.905586  138960 pod_ready.go:94] pod "kube-scheduler-pause-326566" is "Ready"
	I1008 22:42:19.905616  138960 pod_ready.go:86] duration metric: took 400.311734ms for pod "kube-scheduler-pause-326566" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:42:19.905651  138960 pod_ready.go:40] duration metric: took 20.908439429s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:42:19.960011  138960 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:42:19.963040  138960 out.go:179] * Done! kubectl is now configured to use "pause-326566" cluster and "default" namespace by default
	I1008 22:42:20.889287  142502 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-445308
	
	I1008 22:42:20.889313  142502 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-445308"
	I1008 22:42:20.889384  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:20.908171  142502 main.go:141] libmachine: Using SSH client type: native
	I1008 22:42:20.908543  142502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32991 <nil> <nil>}
	I1008 22:42:20.908560  142502 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-445308 && echo "kubernetes-upgrade-445308" | sudo tee /etc/hostname
	I1008 22:42:21.106724  142502 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-445308
	
	I1008 22:42:21.106838  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:21.136136  142502 main.go:141] libmachine: Using SSH client type: native
	I1008 22:42:21.136449  142502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32991 <nil> <nil>}
	I1008 22:42:21.136473  142502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-445308' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-445308/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-445308' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:42:21.286240  142502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:42:21.286269  142502 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:42:21.286307  142502 ubuntu.go:190] setting up certificates
	I1008 22:42:21.286318  142502 provision.go:84] configureAuth start
	I1008 22:42:21.286379  142502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-445308
	I1008 22:42:21.304390  142502 provision.go:143] copyHostCerts
	I1008 22:42:21.304466  142502 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:42:21.304488  142502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:42:21.304573  142502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:42:21.304676  142502 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:42:21.304686  142502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:42:21.304716  142502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:42:21.304785  142502 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:42:21.304795  142502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:42:21.304821  142502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:42:21.304883  142502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-445308 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-445308 localhost minikube]
	I1008 22:42:21.398725  142502 provision.go:177] copyRemoteCerts
	I1008 22:42:21.398801  142502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:42:21.398846  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:21.417911  142502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32991 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/kubernetes-upgrade-445308/id_rsa Username:docker}
	I1008 22:42:21.521297  142502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:42:21.538933  142502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1008 22:42:21.556940  142502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 22:42:21.574788  142502 provision.go:87] duration metric: took 288.444099ms to configureAuth
	I1008 22:42:21.574820  142502 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:42:21.575010  142502 config.go:182] Loaded profile config "kubernetes-upgrade-445308": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:42:21.575118  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:21.592401  142502 main.go:141] libmachine: Using SSH client type: native
	I1008 22:42:21.592699  142502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32991 <nil> <nil>}
	I1008 22:42:21.592724  142502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:42:21.996701  142502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:42:21.996730  142502 machine.go:96] duration metric: took 4.271818057s to provisionDockerMachine
	I1008 22:42:21.996740  142502 client.go:171] duration metric: took 12.342397874s to LocalClient.Create
	I1008 22:42:21.996754  142502 start.go:167] duration metric: took 12.34247177s to libmachine.API.Create "kubernetes-upgrade-445308"
	I1008 22:42:21.996789  142502 start.go:293] postStartSetup for "kubernetes-upgrade-445308" (driver="docker")
	I1008 22:42:21.996810  142502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:42:21.996895  142502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:42:21.996975  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:22.028954  142502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32991 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/kubernetes-upgrade-445308/id_rsa Username:docker}
	I1008 22:42:22.138849  142502 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:42:22.143606  142502 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:42:22.143639  142502 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:42:22.143651  142502 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:42:22.143711  142502 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:42:22.143808  142502 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:42:22.143912  142502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:42:22.155219  142502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:42:22.188760  142502 start.go:296] duration metric: took 191.947752ms for postStartSetup
	I1008 22:42:22.189160  142502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-445308
	I1008 22:42:22.211826  142502 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/kubernetes-upgrade-445308/config.json ...
	I1008 22:42:22.212104  142502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:42:22.212151  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:22.245702  142502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32991 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/kubernetes-upgrade-445308/id_rsa Username:docker}
	I1008 22:42:22.350498  142502 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:42:22.358873  142502 start.go:128] duration metric: took 12.708182045s to createHost
	I1008 22:42:22.358895  142502 start.go:83] releasing machines lock for "kubernetes-upgrade-445308", held for 12.708336024s
	I1008 22:42:22.358967  142502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-445308
	I1008 22:42:22.380359  142502 ssh_runner.go:195] Run: cat /version.json
	I1008 22:42:22.380410  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:22.380697  142502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:42:22.380747  142502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-445308
	I1008 22:42:22.414380  142502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32991 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/kubernetes-upgrade-445308/id_rsa Username:docker}
	I1008 22:42:22.450801  142502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32991 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/kubernetes-upgrade-445308/id_rsa Username:docker}
	I1008 22:42:22.623191  142502 ssh_runner.go:195] Run: systemctl --version
	I1008 22:42:22.631018  142502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:42:22.696972  142502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:42:22.703730  142502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:42:22.703817  142502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:42:22.758646  142502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:42:22.758673  142502 start.go:495] detecting cgroup driver to use...
	I1008 22:42:22.758723  142502 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:42:22.758811  142502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:42:22.784823  142502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:42:22.806160  142502 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:42:22.806246  142502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:42:22.827419  142502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:42:22.848807  142502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:42:23.019611  142502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:42:23.184410  142502 docker.go:234] disabling docker service ...
	I1008 22:42:23.184479  142502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:42:23.212140  142502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:42:23.228575  142502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:42:23.386605  142502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:42:23.540817  142502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:42:23.556305  142502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:42:23.579610  142502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1008 22:42:23.579674  142502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:42:23.590979  142502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:42:23.591040  142502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:42:23.602326  142502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:42:23.612668  142502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:42:23.623579  142502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:42:23.635364  142502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:42:23.646327  142502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:42:23.667491  142502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:42:23.684139  142502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:42:23.694414  142502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:42:23.703173  142502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:42:23.844000  142502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 22:42:23.988686  142502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:42:23.988767  142502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:42:23.993120  142502 start.go:563] Will wait 60s for crictl version
	I1008 22:42:23.993196  142502 ssh_runner.go:195] Run: which crictl
	I1008 22:42:23.997465  142502 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:42:24.027257  142502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:42:24.027373  142502 ssh_runner.go:195] Run: crio --version
	I1008 22:42:24.063109  142502 ssh_runner.go:195] Run: crio --version
	I1008 22:42:24.102939  142502 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1008 22:42:24.106809  142502 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-445308 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:42:24.131648  142502 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:42:24.135666  142502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:42:24.148077  142502 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-445308 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-445308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:42:24.148180  142502 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:42:24.148245  142502 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:42:24.207535  142502 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:42:24.207554  142502 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:42:24.207613  142502 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:42:24.250106  142502 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:42:24.250130  142502 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:42:24.250138  142502 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1008 22:42:24.250226  142502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-445308 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-445308 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:42:24.250305  142502 ssh_runner.go:195] Run: crio config
	I1008 22:42:24.336602  142502 cni.go:84] Creating CNI manager for ""
	I1008 22:42:24.336629  142502 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:42:24.336643  142502 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:42:24.336667  142502 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-445308 NodeName:kubernetes-upgrade-445308 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:42:24.336806  142502 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-445308"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:42:24.336877  142502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1008 22:42:24.347089  142502 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:42:24.347174  142502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:42:24.358601  142502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1008 22:42:24.374547  142502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	
	
	==> CRI-O <==
	Oct 08 22:41:49 pause-326566 crio[2053]: time="2025-10-08T22:41:49.842067009Z" level=info msg="Starting container: 42ed69aa2376eefaa33d776f8616c4b8a26001a733e75649403b32a07a1ba335" id=026f9490-da76-491a-8f25-94952770d77c name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:41:49 pause-326566 crio[2053]: time="2025-10-08T22:41:49.844896875Z" level=info msg="Started container" PID=2148 containerID=55079340c4801100ffcea067eaee3412c383d55dcc8eb14ea741569d0e165dba description=kube-system/kube-apiserver-pause-326566/kube-apiserver id=53e6fefd-b273-43f8-89f0-9966db57ea1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=a087307333c11cd5e9bca9b37a49999b60d3e4397d7b989c11f7b207d79c7bfa
	Oct 08 22:41:49 pause-326566 crio[2053]: time="2025-10-08T22:41:49.845756462Z" level=info msg="Started container" PID=2159 containerID=a638f3395b215269fab29db65cadc1f832167d5f462cd7bf74a958baec1ff1f0 description=kube-system/kindnet-blfz9/kindnet-cni id=ce048d48-e66f-49e0-9681-3f301517b7d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02a3f53cde70909be6abfd2454cbfa546f5dcee6d0b46320ab71cc30573f0025
	Oct 08 22:41:49 pause-326566 crio[2053]: time="2025-10-08T22:41:49.851333049Z" level=info msg="Started container" PID=2167 containerID=42ed69aa2376eefaa33d776f8616c4b8a26001a733e75649403b32a07a1ba335 description=kube-system/kube-scheduler-pause-326566/kube-scheduler id=026f9490-da76-491a-8f25-94952770d77c name=/runtime.v1.RuntimeService/StartContainer sandboxID=537eae1c9877664f517df87e85bb1a338eabb6fc8db148cebc1b9a73595ab084
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.21315355Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=979c79a8-5035-407b-9330-cc02c7f14523 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.214447217Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=ba08337c-3c6f-4a98-9987-8c043fb8abbc name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.217903617Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-c6ps2/coredns" id=1091b0cb-7660-4027-aa90-924c47fd300d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.218257689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.230734972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.231683553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.328043305Z" level=info msg="Created container 305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90: kube-system/coredns-66bc5c9577-c6ps2/coredns" id=1091b0cb-7660-4027-aa90-924c47fd300d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.333616241Z" level=info msg="Starting container: 305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90" id=82038980-c403-428c-8852-527be30a5866 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:41:55 pause-326566 crio[2053]: time="2025-10-08T22:41:55.335500816Z" level=info msg="Started container" PID=2440 containerID=305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90 description=kube-system/coredns-66bc5c9577-c6ps2/coredns id=82038980-c403-428c-8852-527be30a5866 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8379f968ba0a0a6d89d850f8884ddbf1b4c41c155cc3e9e3b5c5ebca73dcdf85
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.341777356Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.355924221Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.356146747Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.357484255Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.374587986Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.374779455Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.374869975Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.394078476Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.394267992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.394354573Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.399345275Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:42:00 pause-326566 crio[2053]: time="2025-10-08T22:42:00.399552629Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	305f705616ea5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   30 seconds ago       Running             coredns                   1                   8379f968ba0a0       coredns-66bc5c9577-c6ps2               kube-system
	a638f3395b215       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   36 seconds ago       Running             kindnet-cni               1                   02a3f53cde709       kindnet-blfz9                          kube-system
	42ed69aa2376e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   36 seconds ago       Running             kube-scheduler            1                   537eae1c98776       kube-scheduler-pause-326566            kube-system
	55079340c4801       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   36 seconds ago       Running             kube-apiserver            1                   a087307333c11       kube-apiserver-pause-326566            kube-system
	ac7fc70b72c25       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   36 seconds ago       Running             etcd                      1                   3f32493492af3       etcd-pause-326566                      kube-system
	9c97602a1b914       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   36 seconds ago       Running             kube-proxy                1                   1028fa8618383       kube-proxy-vs6x9                       kube-system
	5f550cc3256d6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   36 seconds ago       Running             kube-controller-manager   1                   02a84dd8eb9da       kube-controller-manager-pause-326566   kube-system
	85a40f569ce3e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   49 seconds ago       Exited              coredns                   0                   8379f968ba0a0       coredns-66bc5c9577-c6ps2               kube-system
	5d475c9e5e584       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   1028fa8618383       kube-proxy-vs6x9                       kube-system
	ae50592c09b35       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   02a3f53cde709       kindnet-blfz9                          kube-system
	e6b64e927a5ef       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   537eae1c98776       kube-scheduler-pause-326566            kube-system
	fd4fdab202d98       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   3f32493492af3       etcd-pause-326566                      kube-system
	e957cf9693b4b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   02a84dd8eb9da       kube-controller-manager-pause-326566   kube-system
	874596efe54cd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   a087307333c11       kube-apiserver-pause-326566            kube-system
	
	
	==> coredns [305f705616ea5f5eba7a36719b3ffb8c8a6cdd88b79a00f94bd97fea0de39d90] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38224 - 25536 "HINFO IN 3626398726629763703.4107288149810172931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016400289s
	
	
	==> coredns [85a40f569ce3eca8891a77785f3d9bfabe54a45c3f44e307f90d27d4713ffb05] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59816 - 24994 "HINFO IN 8869420070307608886.5237573492754804969. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044203978s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-326566
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-326566
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=pause-326566
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_40_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:40:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-326566
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:42:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:42:18 +0000   Wed, 08 Oct 2025 22:40:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:42:18 +0000   Wed, 08 Oct 2025 22:40:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:42:18 +0000   Wed, 08 Oct 2025 22:40:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:42:18 +0000   Wed, 08 Oct 2025 22:42:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-326566
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 56d89e5d28c749a7bec21082e2dc2094
	  System UUID:                2cc337a4-fb8e-4954-9852-59a8a41e72ee
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-c6ps2                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     91s
	  kube-system                 etcd-pause-326566                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         99s
	  kube-system                 kindnet-blfz9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-pause-326566             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-326566    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-vs6x9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-pause-326566             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 90s                  kube-proxy       
	  Normal   Starting                 28s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  107s (x8 over 108s)  kubelet          Node pause-326566 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    107s (x8 over 108s)  kubelet          Node pause-326566 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s (x8 over 108s)  kubelet          Node pause-326566 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 97s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  97s                  kubelet          Node pause-326566 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s                  kubelet          Node pause-326566 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s                  kubelet          Node pause-326566 status is now: NodeHasSufficientPID
	  Normal   Starting                 97s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           93s                  node-controller  Node pause-326566 event: Registered Node pause-326566 in Controller
	  Warning  ContainerGCFailed        37s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             36s                  kubelet          Node pause-326566 status is now: NodeNotReady
	  Normal   RegisteredNode           25s                  node-controller  Node pause-326566 event: Registered Node pause-326566 in Controller
	  Normal   NodeReady                8s (x2 over 50s)     kubelet          Node pause-326566 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:17] overlayfs: idmapped layers are currently not supported
	[  +3.473782] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:18] overlayfs: idmapped layers are currently not supported
	[ +40.002132] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:19] overlayfs: idmapped layers are currently not supported
	[  +3.771758] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:20] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:21] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:22] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:27] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ac7fc70b72c25acddce469364720ff480418c83f06d50fae2989fdc64c174ae9] <==
	{"level":"warn","ts":"2025-10-08T22:41:53.580002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.646361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.758279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.766552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.817817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.876037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.895431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.945210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:53.991771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.058473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.069476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.113974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.150423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.190189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.223993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.300526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.344584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.432785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.484413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.514693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.606620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.644043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.731690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.771133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:41:54.869174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54134","server-name":"","error":"EOF"}
	
	
	==> etcd [fd4fdab202d98a69cef0468b988f6f07eafd55e1d522df910f09362c61f70214] <==
	{"level":"warn","ts":"2025-10-08T22:40:44.745322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.759386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.781533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.806287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.845166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.848151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:40:44.951222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-08T22:41:42.160361Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-08T22:41:42.160419Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-326566","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-08T22:41:42.160541Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-08T22:41:44.879610Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-08T22:41:44.879691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-08T22:41:44.879714Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-08T22:41:44.879818Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-08T22:41:44.879838Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-08T22:41:44.880071Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-08T22:41:44.880107Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-08T22:41:44.880116Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-08T22:41:44.880166Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-08T22:41:44.880179Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-08T22:41:44.880186Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-08T22:41:44.883293Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-08T22:41:44.883371Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-08T22:41:44.883404Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-08T22:41:44.883412Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-326566","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 22:42:26 up  1:24,  0 user,  load average: 3.44, 2.10, 1.83
	Linux pause-326566 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a638f3395b215269fab29db65cadc1f832167d5f462cd7bf74a958baec1ff1f0] <==
	I1008 22:41:50.013618       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:41:50.023493       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1008 22:41:50.023629       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:41:50.023642       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:41:50.023653       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:41:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:41:50.341421       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:41:50.341514       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:41:50.341550       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:41:50.345410       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1008 22:41:57.646308       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:41:57.646386       1 metrics.go:72] Registering metrics
	I1008 22:41:57.646453       1 controller.go:711] "Syncing nftables rules"
	I1008 22:42:00.341278       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:42:00.341376       1 main.go:301] handling current node
	I1008 22:42:10.341855       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:42:10.341888       1 main.go:301] handling current node
	I1008 22:42:20.341217       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:42:20.341283       1 main.go:301] handling current node
	
	
	==> kindnet [ae50592c09b35c2007c94b7f04f51edf308e741400b22ae3b3ab8f45411d783c] <==
	I1008 22:40:55.721682       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:40:55.722105       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1008 22:40:55.722235       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:40:55.722246       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:40:55.722256       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:40:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:40:56.017121       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:40:56.025926       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:40:56.025984       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:40:56.028140       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 22:41:26.017413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 22:41:26.017413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 22:41:26.022087       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1008 22:41:26.028804       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1008 22:41:27.228649       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:41:27.228682       1 metrics.go:72] Registering metrics
	I1008 22:41:27.228756       1 controller.go:711] "Syncing nftables rules"
	I1008 22:41:36.019235       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:41:36.019299       1 main.go:301] handling current node
	
	
	==> kube-apiserver [55079340c4801100ffcea067eaee3412c383d55dcc8eb14ea741569d0e165dba] <==
	I1008 22:41:57.471181       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1008 22:41:57.477960       1 policy_source.go:240] refreshing policies
	I1008 22:41:57.506379       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1008 22:41:57.509924       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1008 22:41:57.510049       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 22:41:57.512690       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1008 22:41:57.522839       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:41:57.530226       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:41:57.546110       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 22:41:57.546234       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 22:41:57.546260       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 22:41:57.546267       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 22:41:57.546378       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 22:41:57.602192       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 22:41:57.609767       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 22:41:57.624410       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 22:41:57.631493       1 cache.go:39] Caches are synced for autoregister controller
	E1008 22:41:57.656968       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 22:41:57.677279       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:41:59.685417       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 22:42:01.316957       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 22:42:01.347920       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 22:42:01.376277       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:42:01.517958       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1008 22:42:01.567352       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [874596efe54cd8d210100aedef1c06813d435d5cb3aa7beb1ddc5d46acc2129d] <==
	W1008 22:41:43.186488       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186604       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186718       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186800       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186841       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.186886       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.187106       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.189545       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.190772       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.191045       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.191098       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.191251       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.193688       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.193771       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.193688       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.193815       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199282       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199426       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199593       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199778       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199829       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199795       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199918       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.199958       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1008 22:41:43.202457       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5f550cc3256d69cf0a87aff890ec252d743b03a23efa555edab77b39192914f0] <==
	I1008 22:42:01.190956       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 22:42:01.201112       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1008 22:42:01.208331       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 22:42:01.210399       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 22:42:01.210977       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 22:42:01.212493       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 22:42:01.217706       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 22:42:01.217878       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:42:01.218582       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 22:42:01.225850       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 22:42:01.226095       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 22:42:01.226368       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 22:42:01.226481       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-326566"
	I1008 22:42:01.226551       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 22:42:01.226608       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1008 22:42:01.229809       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1008 22:42:01.246820       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 22:42:01.262903       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1008 22:42:01.262994       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 22:42:01.275106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:42:01.275338       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:42:01.275385       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:42:01.275234       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:42:01.275307       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:42:21.229512       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [e957cf9693b4bae8678cbf4b0eb2f02a61ff250134c2dce2b23a883229c58f85] <==
	I1008 22:40:53.891640       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 22:40:53.892574       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 22:40:53.892886       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 22:40:53.909886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:40:53.910090       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 22:40:53.911027       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 22:40:53.911239       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 22:40:53.932693       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:40:53.932862       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:40:53.932903       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:40:53.932830       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1008 22:40:53.932842       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1008 22:40:53.932794       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 22:40:53.932815       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 22:40:53.933227       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 22:40:53.937743       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:40:53.937852       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 22:40:53.937927       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 22:40:53.943559       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 22:40:53.943648       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1008 22:40:53.963035       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 22:40:53.982724       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 22:40:53.982845       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1008 22:40:53.989715       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:41:38.897945       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5d475c9e5e584ad9eb05a2319b77f94d21c2a22eaf77a8598a1f5dedf1846050] <==
	I1008 22:40:55.955152       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:40:56.095445       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:40:56.220727       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:40:56.220831       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1008 22:40:56.220944       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:40:56.245988       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:40:56.246110       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:40:56.251200       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:40:56.252131       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:40:56.252209       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:40:56.257133       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:40:56.257211       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:40:56.257705       1 config.go:200] "Starting service config controller"
	I1008 22:40:56.257765       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:40:56.258166       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:40:56.262168       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:40:56.259569       1 config.go:309] "Starting node config controller"
	I1008 22:40:56.262313       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:40:56.263840       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 22:40:56.357510       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 22:40:56.358725       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 22:40:56.369777       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [9c97602a1b914b0bec562d1ab31e684e86b9cba84e0a222d12a95a6bf582b626] <==
	I1008 22:41:53.837389       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:41:55.087157       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:41:57.625968       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:41:57.625996       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1008 22:41:57.626073       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:41:57.993330       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:41:57.993451       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:41:58.085232       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:41:58.085655       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:41:58.085981       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:41:58.087984       1 config.go:200] "Starting service config controller"
	I1008 22:41:58.088055       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:41:58.088103       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:41:58.088130       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:41:58.088173       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:41:58.088200       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:41:58.132749       1 config.go:309] "Starting node config controller"
	I1008 22:41:58.141965       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:41:58.191574       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 22:41:58.191919       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 22:41:58.191948       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 22:41:58.249046       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [42ed69aa2376eefaa33d776f8616c4b8a26001a733e75649403b32a07a1ba335] <==
	I1008 22:41:57.329945       1 serving.go:386] Generated self-signed cert in-memory
	I1008 22:42:00.939948       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 22:42:00.939987       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:42:00.947802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 22:42:00.947997       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 22:42:00.948051       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 22:42:00.948101       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 22:42:00.963404       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:42:00.963571       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:42:00.963624       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:42:00.963656       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:42:01.056733       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 22:42:01.064624       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:42:01.065782       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [e6b64e927a5ef789afa398e08b5e4229bcbefde1093a6a6b5439195f7bfa1789] <==
	E1008 22:40:47.036465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 22:40:47.036534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 22:40:47.036573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 22:40:47.036666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 22:40:47.036718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 22:40:47.036771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 22:40:47.036834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 22:40:47.036880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 22:40:47.036930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 22:40:47.036973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 22:40:47.037022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 22:40:47.039705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 22:40:47.039766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 22:40:47.039814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1008 22:40:47.039888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 22:40:47.039922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 22:40:47.048164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 22:40:48.075223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1008 22:40:49.880993       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:41:42.184988       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1008 22:41:42.185025       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1008 22:41:42.185060       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1008 22:41:42.185113       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:41:42.222298       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1008 22:41:42.222426       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.490833    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="331b114b916326e1fcaba026d0192f8e" pod="kube-system/kube-controller-manager-pause-326566"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.499245    1300 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-c6ps2" podUID="08d0f5d9-3b28-4b70-a497-45c06e51b990"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.500917    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="331b114b916326e1fcaba026d0192f8e" pod="kube-system/kube-controller-manager-pause-326566"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.502147    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-blfz9\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="e5a1288e-4110-49f9-85bf-0f418d80b6b2" pod="kube-system/kindnet-blfz9"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.502872    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vs6x9\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1acc7ed2-b8d7-43e3-bbad-880f4ad69813" pod="kube-system/kube-proxy-vs6x9"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.503484    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-c6ps2\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="08d0f5d9-3b28-4b70-a497-45c06e51b990" pod="kube-system/coredns-66bc5c9577-c6ps2"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.504042    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="49fd2db924ccd3cab82fddf4c055bc87" pod="kube-system/kube-scheduler-pause-326566"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.504613    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9d2e64967bfbc02363d2b3b85b86949a" pod="kube-system/etcd-pause-326566"
	Oct 08 22:41:49 pause-326566 kubelet[1300]: E1008 22:41:49.505321    1300 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-326566\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="fb81b5471683d9ef68fe924f3df9ead2" pod="kube-system/kube-apiserver-pause-326566"
	Oct 08 22:41:50 pause-326566 kubelet[1300]: I1008 22:41:50.670266    1300 setters.go:543] "Node became not ready" node="pause-326566" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-08T22:41:50Z","lastTransitionTime":"2025-10-08T22:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Oct 08 22:41:51 pause-326566 kubelet[1300]: E1008 22:41:51.217265    1300 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-c6ps2" podUID="08d0f5d9-3b28-4b70-a497-45c06e51b990"
	Oct 08 22:41:53 pause-326566 kubelet[1300]: E1008 22:41:53.209943    1300 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-c6ps2" podUID="08d0f5d9-3b28-4b70-a497-45c06e51b990"
	Oct 08 22:41:55 pause-326566 kubelet[1300]: I1008 22:41:55.210623    1300 scope.go:117] "RemoveContainer" containerID="85a40f569ce3eca8891a77785f3d9bfabe54a45c3f44e307f90d27d4713ffb05"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.153413    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-326566\" is forbidden: User \"system:node:pause-326566\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" podUID="49fd2db924ccd3cab82fddf4c055bc87" pod="kube-system/kube-scheduler-pause-326566"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.153743    1300 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-326566\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.153795    1300 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-326566\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.153821    1300 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-326566\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.288025    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-326566\" is forbidden: User \"system:node:pause-326566\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" podUID="9d2e64967bfbc02363d2b3b85b86949a" pod="kube-system/etcd-pause-326566"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.464621    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-326566\" is forbidden: User \"system:node:pause-326566\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" podUID="fb81b5471683d9ef68fe924f3df9ead2" pod="kube-system/kube-apiserver-pause-326566"
	Oct 08 22:41:57 pause-326566 kubelet[1300]: E1008 22:41:57.495297    1300 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-326566\" is forbidden: User \"system:node:pause-326566\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-326566' and this object" podUID="331b114b916326e1fcaba026d0192f8e" pod="kube-system/kube-controller-manager-pause-326566"
	Oct 08 22:41:59 pause-326566 kubelet[1300]: W1008 22:41:59.528969    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 08 22:42:09 pause-326566 kubelet[1300]: W1008 22:42:09.551160    1300 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 08 22:42:20 pause-326566 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 22:42:20 pause-326566 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 22:42:20 pause-326566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-326566 -n pause-326566
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-326566 -n pause-326566: exit status 2 (490.000036ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
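helpers_test.go tolerates the non-zero exit here because `minikube status` encodes component health in its exit status even when an individual field (such as the `{{.APIServer}}` template above) still prints Running. For a fuller view of the same state, assuming the profile were still up, one could request machine-readable output (illustrative command, not part of the captured run):

	out/minikube-linux-arm64 status -p pause-326566 --output json    # Host/Kubelet/APIServer/Kubeconfig state as JSON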
helpers_test.go:269: (dbg) Run:  kubectl --context pause-326566 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.84s)
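The kubelet log above ends with systemd stopping kubelet.service, and the scheduler had already exited ("finished without leader elect"), yet the status probe still reported the API server as Running. A possible follow-up triage, not part of the captured run, is to inspect runtime state on the node directly (profile name taken from the logs above):

	# List every container the runtime knows about, including stopped/created ones:
	out/minikube-linux-arm64 ssh -p pause-326566 "sudo crictl ps -a"
	# The same runc listing that the paused check in the next failing test shells out to:
	out/minikube-linux-arm64 ssh -p pause-326566 "sudo runc list -f json"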

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.129522ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:54:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
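The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's check of whether the cluster is paused, which shells out to `sudo runc list -f json` on the node and fails because /run/runc does not exist (note that /run is a tmpfs inside the kic container, per the docker inspect output below). A minimal reproduction sketch, assuming the profile is still running; the first command is taken verbatim from the error, the second is only an illustrative look at the directory involved:

	out/minikube-linux-arm64 ssh -p old-k8s-version-110407 "sudo runc list -f json"
	out/minikube-linux-arm64 ssh -p old-k8s-version-110407 "sudo ls -la /run | head"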
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-110407 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-110407 describe deploy/metrics-server -n kube-system: exit status 1 (79.522025ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-110407 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
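The assertion expects the metrics-server Deployment to carry the overridden registry/image passed via --registries/--images, i.e. an image string containing fake.domain/registry.k8s.io/echoserver:1.4; because enabling the addon failed, the Deployment was never created (NotFound above) and the deployment info is empty. For reference, a manual version of the same check, assuming the addon had been enabled successfully, would be:

	kubectl --context old-k8s-version-110407 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'    # should contain fake.domain/registry.k8s.io/echoserver:1.4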
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-110407
helpers_test.go:243: (dbg) docker inspect old-k8s-version-110407:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04",
	        "Created": "2025-10-08T22:53:24.5168981Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 178055,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:53:24.595029255Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/hostname",
	        "HostsPath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/hosts",
	        "LogPath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04-json.log",
	        "Name": "/old-k8s-version-110407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04",
	                "LowerDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110407",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110407",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110407",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3fb848109e7c65df63fc9434988f59334224dac8d74b9042377c1a1edc60b3f3",
	            "SandboxKey": "/var/run/docker/netns/3fb848109e7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:fe:cf:7c:14:a4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed0b9760a08ed8f2576688b000be4aceb5b3090420383440e59b46e430cff699",
	                    "EndpointID": "dfbc86b60065ba378ace416b19631a1f27ac5e878afc9f7ee923031ec5dcf294",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-110407",
	                        "164acd06879a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110407 -n old-k8s-version-110407
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-110407 logs -n 25
E1008 22:54:29.998297    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-110407 logs -n 25: (1.213067425s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-840929 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo containerd config dump                                                                                                                                                                                                  │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo crio config                                                                                                                                                                                                             │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ delete  │ -p cilium-840929                                                                                                                                                                                                                              │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:45 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:46 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ delete  │ -p cert-expiration-292528                                                                                                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ start   │ -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │                     │
	│ delete  │ -p force-systemd-env-092546                                                                                                                                                                                                                   │ force-systemd-env-092546  │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:52 UTC │
	│ start   │ -p cert-options-378019 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ cert-options-378019 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ -p cert-options-378019 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ delete  │ -p cert-options-378019                                                                                                                                                                                                                        │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:53:18
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:53:18.785890  177660 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:53:18.786018  177660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:53:18.786029  177660 out.go:374] Setting ErrFile to fd 2...
	I1008 22:53:18.786035  177660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:53:18.786288  177660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:53:18.786696  177660 out.go:368] Setting JSON to false
	I1008 22:53:18.787591  177660 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5749,"bootTime":1759958250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:53:18.787657  177660 start.go:141] virtualization:  
	I1008 22:53:18.791120  177660 out.go:179] * [old-k8s-version-110407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:53:18.795399  177660 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:53:18.795472  177660 notify.go:220] Checking for updates...
	I1008 22:53:18.801838  177660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:53:18.805152  177660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:53:18.808325  177660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:53:18.811368  177660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:53:18.814389  177660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:53:18.818155  177660 config.go:182] Loaded profile config "force-systemd-flag-385382": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:53:18.818305  177660 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:53:18.839328  177660 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:53:18.839449  177660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:53:18.897419  177660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:53:18.888227917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:53:18.897532  177660 docker.go:318] overlay module found
	I1008 22:53:18.900772  177660 out.go:179] * Using the docker driver based on user configuration
	I1008 22:53:18.903800  177660 start.go:305] selected driver: docker
	I1008 22:53:18.903821  177660 start.go:925] validating driver "docker" against <nil>
	I1008 22:53:18.903858  177660 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:53:18.904592  177660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:53:18.956905  177660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:53:18.948018125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:53:18.957058  177660 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 22:53:18.957301  177660 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:53:18.960268  177660 out.go:179] * Using Docker driver with root privileges
	I1008 22:53:18.963164  177660 cni.go:84] Creating CNI manager for ""
	I1008 22:53:18.963237  177660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:53:18.963252  177660 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 22:53:18.963348  177660 start.go:349] cluster config:
	{Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:53:18.968322  177660 out.go:179] * Starting "old-k8s-version-110407" primary control-plane node in "old-k8s-version-110407" cluster
	I1008 22:53:18.971185  177660 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:53:18.974202  177660 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:53:18.977035  177660 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:53:18.977104  177660 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1008 22:53:18.977122  177660 cache.go:58] Caching tarball of preloaded images
	I1008 22:53:18.977121  177660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:53:18.977230  177660 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:53:18.977244  177660 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1008 22:53:18.977374  177660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/config.json ...
	I1008 22:53:18.977402  177660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/config.json: {Name:mk19cc325e54edaa63da82b3bc0e0d4b5d25e688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:53:18.996513  177660 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:53:18.996542  177660 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:53:18.996563  177660 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:53:18.996589  177660 start.go:360] acquireMachinesLock for old-k8s-version-110407: {Name:mkbaacf9b00bd8ee87fd567c565e6e2b19f705c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:53:18.996692  177660 start.go:364] duration metric: took 83.865µs to acquireMachinesLock for "old-k8s-version-110407"
	I1008 22:53:18.996724  177660 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:53:18.996806  177660 start.go:125] createHost starting for "" (driver="docker")
	I1008 22:53:19.000259  177660 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:53:19.000477  177660 start.go:159] libmachine.API.Create for "old-k8s-version-110407" (driver="docker")
	I1008 22:53:19.000521  177660 client.go:168] LocalClient.Create starting
	I1008 22:53:19.000587  177660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:53:19.000632  177660 main.go:141] libmachine: Decoding PEM data...
	I1008 22:53:19.000655  177660 main.go:141] libmachine: Parsing certificate...
	I1008 22:53:19.000714  177660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:53:19.000736  177660 main.go:141] libmachine: Decoding PEM data...
	I1008 22:53:19.000749  177660 main.go:141] libmachine: Parsing certificate...
	I1008 22:53:19.001129  177660 cli_runner.go:164] Run: docker network inspect old-k8s-version-110407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:53:19.018422  177660 cli_runner.go:211] docker network inspect old-k8s-version-110407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:53:19.018503  177660 network_create.go:284] running [docker network inspect old-k8s-version-110407] to gather additional debugging logs...
	I1008 22:53:19.018524  177660 cli_runner.go:164] Run: docker network inspect old-k8s-version-110407
	W1008 22:53:19.034050  177660 cli_runner.go:211] docker network inspect old-k8s-version-110407 returned with exit code 1
	I1008 22:53:19.034079  177660 network_create.go:287] error running [docker network inspect old-k8s-version-110407]: docker network inspect old-k8s-version-110407: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-110407 not found
	I1008 22:53:19.034094  177660 network_create.go:289] output of [docker network inspect old-k8s-version-110407]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-110407 not found
	
	** /stderr **
	I1008 22:53:19.034260  177660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:53:19.050820  177660 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:53:19.051253  177660 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:53:19.051550  177660 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:53:19.051784  177660 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-94ec01d43e41 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:6d:06:9b:60:31} reservation:<nil>}
	I1008 22:53:19.052199  177660 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a07fc0}
	I1008 22:53:19.052227  177660 network_create.go:124] attempt to create docker network old-k8s-version-110407 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1008 22:53:19.052288  177660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-110407 old-k8s-version-110407
	I1008 22:53:19.111404  177660 network_create.go:108] docker network old-k8s-version-110407 192.168.85.0/24 created
	I1008 22:53:19.111439  177660 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-110407" container
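	(Sketch, not part of the test run: a quick way to confirm the subnet and gateway the new network actually received, reusing the same inspect-template style this log uses.)
		docker network inspect old-k8s-version-110407 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'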
	I1008 22:53:19.111509  177660 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:53:19.127971  177660 cli_runner.go:164] Run: docker volume create old-k8s-version-110407 --label name.minikube.sigs.k8s.io=old-k8s-version-110407 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:53:19.146012  177660 oci.go:103] Successfully created a docker volume old-k8s-version-110407
	I1008 22:53:19.146107  177660 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-110407-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-110407 --entrypoint /usr/bin/test -v old-k8s-version-110407:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:53:19.668883  177660 oci.go:107] Successfully prepared a docker volume old-k8s-version-110407
	I1008 22:53:19.668939  177660 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:53:19.668959  177660 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:53:19.669028  177660 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-110407:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 22:53:24.435087  177660 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-110407:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.766021188s)
	I1008 22:53:24.435117  177660 kic.go:203] duration metric: took 4.766154179s to extract preloaded images to volume ...
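	(The preload step above unpacks the cached CRI-O image tarball straight into the machine's Docker volume so the runtime comes up with the v1.28.0 images already on disk. A sketch of listing what such a tarball carries, assuming the lz4 CLI is available on the host and using the cache path from this run:)
		# Decompress the preload tarball on the fly and list its first entries
		lz4 -dc /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 | tar -tvf - | head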
	W1008 22:53:24.435251  177660 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:53:24.435369  177660 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:53:24.501744  177660 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-110407 --name old-k8s-version-110407 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-110407 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-110407 --network old-k8s-version-110407 --ip 192.168.85.2 --volume old-k8s-version-110407:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:53:24.824919  177660 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Running}}
	I1008 22:53:24.846831  177660 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:53:24.868585  177660 cli_runner.go:164] Run: docker exec old-k8s-version-110407 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:53:24.919562  177660 oci.go:144] the created container "old-k8s-version-110407" has a running status.
	I1008 22:53:24.919619  177660 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa...
	I1008 22:53:25.903974  177660 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:53:25.923491  177660 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:53:25.940698  177660 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:53:25.940719  177660 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-110407 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:53:25.981857  177660 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:53:26.001784  177660 machine.go:93] provisionDockerMachine start ...
	I1008 22:53:26.001885  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:53:26.021808  177660 main.go:141] libmachine: Using SSH client type: native
	I1008 22:53:26.022154  177660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33051 <nil> <nil>}
	I1008 22:53:26.022170  177660 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:53:26.022868  177660 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 22:53:29.169055  177660 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-110407
	
	I1008 22:53:29.169082  177660 ubuntu.go:182] provisioning hostname "old-k8s-version-110407"
	I1008 22:53:29.169142  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:53:29.186580  177660 main.go:141] libmachine: Using SSH client type: native
	I1008 22:53:29.186895  177660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33051 <nil> <nil>}
	I1008 22:53:29.186912  177660 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-110407 && echo "old-k8s-version-110407" | sudo tee /etc/hostname
	I1008 22:53:29.342965  177660 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-110407
	
	I1008 22:53:29.343055  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:53:29.360913  177660 main.go:141] libmachine: Using SSH client type: native
	I1008 22:53:29.361226  177660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33051 <nil> <nil>}
	I1008 22:53:29.361251  177660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-110407' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-110407/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-110407' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:53:29.505780  177660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:53:29.505808  177660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:53:29.505839  177660 ubuntu.go:190] setting up certificates
	I1008 22:53:29.505849  177660 provision.go:84] configureAuth start
	I1008 22:53:29.505914  177660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110407
	I1008 22:53:29.523782  177660 provision.go:143] copyHostCerts
	I1008 22:53:29.523848  177660 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:53:29.523862  177660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:53:29.523938  177660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:53:29.524037  177660 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:53:29.524045  177660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:53:29.524073  177660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:53:29.524135  177660 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:53:29.524143  177660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:53:29.524167  177660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:53:29.524225  177660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-110407 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-110407]
	I1008 22:53:30.167387  177660 provision.go:177] copyRemoteCerts
	I1008 22:53:30.167459  177660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:53:30.167499  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:53:30.184833  177660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:53:30.285932  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:53:30.304297  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 22:53:30.321702  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 22:53:30.339187  177660 provision.go:87] duration metric: took 833.314453ms to configureAuth
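	(configureAuth above issues a server certificate whose SANs cover the addresses listed in the san=[...] line. A sketch of double-checking what ended up in the generated cert, assuming the same machine paths as this run:)
		# Print the SANs of the machine server certificate
		openssl x509 -in /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'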
	I1008 22:53:30.339215  177660 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:53:30.339407  177660 config.go:182] Loaded profile config "old-k8s-version-110407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:53:30.339523  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:53:30.356461  177660 main.go:141] libmachine: Using SSH client type: native
	I1008 22:53:30.356766  177660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33051 <nil> <nil>}
	I1008 22:53:30.356788  177660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:53:30.608526  177660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:53:30.608547  177660 machine.go:96] duration metric: took 4.606741465s to provisionDockerMachine
	I1008 22:53:30.608557  177660 client.go:171] duration metric: took 11.608025148s to LocalClient.Create
	I1008 22:53:30.608575  177660 start.go:167] duration metric: took 11.608099348s to libmachine.API.Create "old-k8s-version-110407"
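	(Everything provisionDockerMachine did above was driven over SSH to the kicbase container, using the host port Docker published for 22/tcp — 33051 in this run. A minimal sketch of reproducing that connection by hand, assuming the same port mapping and key path shown in the log:)
		# Resolve the published SSH port, then connect as the 'docker' user
		PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-110407)
		ssh -i /home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa -p "$PORT" docker@127.0.0.1 hostname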
	I1008 22:53:30.608583  177660 start.go:293] postStartSetup for "old-k8s-version-110407" (driver="docker")
	I1008 22:53:30.608592  177660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:53:30.608669  177660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:53:30.608710  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:53:30.627061  177660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:53:30.729379  177660 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:53:30.732531  177660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:53:30.732561  177660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:53:30.732572  177660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:53:30.732627  177660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:53:30.732715  177660 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:53:30.732822  177660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:53:30.739999  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:53:30.758808  177660 start.go:296] duration metric: took 150.211493ms for postStartSetup
	I1008 22:53:30.759187  177660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110407
	I1008 22:53:30.775244  177660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/config.json ...
	I1008 22:53:30.775523  177660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:53:30.775573  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:53:30.791976  177660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:53:30.890667  177660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:53:30.895355  177660 start.go:128] duration metric: took 11.898533549s to createHost
	I1008 22:53:30.895377  177660 start.go:83] releasing machines lock for "old-k8s-version-110407", held for 11.898670364s
	I1008 22:53:30.895456  177660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110407
	I1008 22:53:30.912887  177660 ssh_runner.go:195] Run: cat /version.json
	I1008 22:53:30.912948  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:53:30.913192  177660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:53:30.913252  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:53:30.930838  177660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:53:30.937584  177660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:53:31.033621  177660 ssh_runner.go:195] Run: systemctl --version
	I1008 22:53:31.129754  177660 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:53:31.166314  177660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:53:31.170602  177660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:53:31.170680  177660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:53:31.199566  177660 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:53:31.199639  177660 start.go:495] detecting cgroup driver to use...
	I1008 22:53:31.199687  177660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:53:31.199767  177660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:53:31.217366  177660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:53:31.229793  177660 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:53:31.229914  177660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:53:31.245960  177660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:53:31.265063  177660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:53:31.383350  177660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:53:31.514572  177660 docker.go:234] disabling docker service ...
	I1008 22:53:31.514690  177660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:53:31.536902  177660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:53:31.551315  177660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:53:31.694529  177660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:53:31.817957  177660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:53:31.833111  177660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:53:31.849466  177660 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1008 22:53:31.849676  177660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:53:31.859431  177660 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:53:31.859573  177660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:53:31.869095  177660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:53:31.878327  177660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:53:31.887451  177660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:53:31.896041  177660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:53:31.905391  177660 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:53:31.919375  177660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:53:31.928690  177660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:53:31.936665  177660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:53:31.944323  177660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:53:32.063127  177660 ssh_runner.go:195] Run: sudo systemctl restart crio
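	(The sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: pause_image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is set to "cgroupfs" with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A sketch of verifying those fields after the restart, assuming an SSH session on the node:)
		# Show only the fields this provisioning pass rewrote
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf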
	I1008 22:53:32.198437  177660 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:53:32.198550  177660 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:53:32.202346  177660 start.go:563] Will wait 60s for crictl version
	I1008 22:53:32.202455  177660 ssh_runner.go:195] Run: which crictl
	I1008 22:53:32.206172  177660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:53:32.229574  177660 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:53:32.229771  177660 ssh_runner.go:195] Run: crio --version
	I1008 22:53:32.263534  177660 ssh_runner.go:195] Run: crio --version
	I1008 22:53:32.296205  177660 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1008 22:53:32.298997  177660 cli_runner.go:164] Run: docker network inspect old-k8s-version-110407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:53:32.315331  177660 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:53:32.319006  177660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:53:32.328410  177660 kubeadm.go:883] updating cluster {Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:53:32.328536  177660 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:53:32.328607  177660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:53:32.363384  177660 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:53:32.363408  177660 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:53:32.363466  177660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:53:32.388761  177660 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:53:32.388786  177660 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:53:32.388795  177660 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1008 22:53:32.388889  177660 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-110407 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
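	(The ExecStart override above is what later gets written to the kubelet systemd drop-in — see the scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf line further down. A sketch of confirming which flags the unit actually runs with, assuming an SSH session on the node:)
		# Print the merged kubelet unit, including drop-in overrides
		sudo systemctl cat kubelet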
	I1008 22:53:32.388975  177660 ssh_runner.go:195] Run: crio config
	I1008 22:53:32.458298  177660 cni.go:84] Creating CNI manager for ""
	I1008 22:53:32.458327  177660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:53:32.458347  177660 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:53:32.458371  177660 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-110407 NodeName:old-k8s-version-110407 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:53:32.458515  177660 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-110407"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:53:32.458592  177660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1008 22:53:32.468090  177660 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:53:32.468176  177660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:53:32.477168  177660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1008 22:53:32.490597  177660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:53:32.504684  177660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
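	(The kubeadm config rendered above is what gets handed to kubeadm init further down, after being copied to /var/tmp/minikube/kubeadm.yaml. A sketch of exercising it without changing the node, assuming the v1.28.0 binaries path used by this run:)
		# Render the init steps for this config without applying anything
		sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run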
	I1008 22:53:32.518436  177660 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:53:32.522231  177660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:53:32.532284  177660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:53:32.648488  177660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:53:32.664277  177660 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407 for IP: 192.168.85.2
	I1008 22:53:32.664301  177660 certs.go:195] generating shared ca certs ...
	I1008 22:53:32.664318  177660 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:53:32.664446  177660 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:53:32.664494  177660 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:53:32.664506  177660 certs.go:257] generating profile certs ...
	I1008 22:53:32.664562  177660 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.key
	I1008 22:53:32.664578  177660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt with IP's: []
	I1008 22:53:32.844689  177660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt ...
	I1008 22:53:32.844719  177660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: {Name:mkcd31b3abc1ee53d0a2dcbf39c9b403f4e5b1f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:53:32.844952  177660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.key ...
	I1008 22:53:32.844968  177660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.key: {Name:mkf59cdb680c9d52d8c9e8750a2011f5c2ae0b5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:53:32.845064  177660 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key.5d0843e3
	I1008 22:53:32.845085  177660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.crt.5d0843e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1008 22:53:33.339585  177660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.crt.5d0843e3 ...
	I1008 22:53:33.339617  177660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.crt.5d0843e3: {Name:mkc3161a85c41741e307a694f9bbe31862ddc9da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:53:33.339801  177660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key.5d0843e3 ...
	I1008 22:53:33.339817  177660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key.5d0843e3: {Name:mk6c0c9d837ed1fa0fde0be55b29cb31ea542fad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:53:33.339894  177660 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.crt.5d0843e3 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.crt
	I1008 22:53:33.339972  177660 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key.5d0843e3 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key
	I1008 22:53:33.340037  177660 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.key
	I1008 22:53:33.340056  177660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.crt with IP's: []
	I1008 22:53:33.531710  177660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.crt ...
	I1008 22:53:33.531744  177660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.crt: {Name:mk72a0f901a65494f69323041c622b181e54daf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:53:33.531932  177660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.key ...
	I1008 22:53:33.531946  177660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.key: {Name:mk04c96aa93516820590ffe79928ee0b47070f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:53:33.532131  177660 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:53:33.532186  177660 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:53:33.532201  177660 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:53:33.532226  177660 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:53:33.532254  177660 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:53:33.532279  177660 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:53:33.532325  177660 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:53:33.532876  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:53:33.553091  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:53:33.572996  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:53:33.591909  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:53:33.609696  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 22:53:33.627684  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:53:33.646480  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:53:33.664497  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 22:53:33.683165  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:53:33.702115  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:53:33.720334  177660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:53:33.738555  177660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:53:33.752196  177660 ssh_runner.go:195] Run: openssl version
	I1008 22:53:33.759093  177660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:53:33.767661  177660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:53:33.771501  177660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:53:33.771615  177660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:53:33.813030  177660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:53:33.821576  177660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:53:33.829660  177660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:53:33.833456  177660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:53:33.833523  177660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:53:33.874952  177660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:53:33.883650  177660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:53:33.892077  177660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:53:33.896211  177660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:53:33.896305  177660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:53:33.939939  177660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
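	(Each of the three cert installs above follows OpenSSL's hashed-name convention: the symlink under /etc/ssl/certs is named after the certificate's subject hash — b5213941 for minikubeCA.pem here — so library lookups can resolve it. A sketch of deriving that name for any PEM, assuming it is run on the node:)
		# Compute the subject hash; "<hash>.0" is the expected symlink name in /etc/ssl/certs
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		echo "${HASH}.0"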
	I1008 22:53:33.948627  177660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:53:33.952266  177660 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 22:53:33.952319  177660 kubeadm.go:400] StartCluster: {Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:53:33.952390  177660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:53:33.952446  177660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:53:33.988219  177660 cri.go:89] found id: ""
	I1008 22:53:33.988294  177660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:53:33.996619  177660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:53:34.006173  177660 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:53:34.006252  177660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:53:34.015337  177660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:53:34.015362  177660 kubeadm.go:157] found existing configuration files:
	
	I1008 22:53:34.015431  177660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:53:34.023780  177660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:53:34.023846  177660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:53:34.031790  177660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:53:34.039309  177660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:53:34.039391  177660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:53:34.046902  177660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:53:34.054735  177660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:53:34.054799  177660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:53:34.072320  177660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:53:34.080725  177660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:53:34.080797  177660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:53:34.089263  177660 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:53:34.198219  177660 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:53:34.280672  177660 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:53:49.188030  177660 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1008 22:53:49.188086  177660 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:53:49.188184  177660 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:53:49.188242  177660 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:53:49.188277  177660 kubeadm.go:318] OS: Linux
	I1008 22:53:49.188324  177660 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:53:49.188375  177660 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:53:49.188426  177660 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:53:49.188488  177660 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:53:49.188539  177660 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:53:49.188589  177660 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:53:49.188637  177660 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:53:49.188687  177660 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:53:49.188735  177660 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:53:49.188817  177660 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:53:49.188917  177660 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:53:49.189018  177660 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 22:53:49.189084  177660 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:53:49.192282  177660 out.go:252]   - Generating certificates and keys ...
	I1008 22:53:49.192388  177660 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:53:49.192455  177660 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:53:49.192527  177660 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 22:53:49.192586  177660 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 22:53:49.192650  177660 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 22:53:49.192713  177660 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 22:53:49.192771  177660 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 22:53:49.192907  177660 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-110407] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:53:49.192963  177660 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 22:53:49.193092  177660 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-110407] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:53:49.193161  177660 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 22:53:49.193227  177660 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 22:53:49.193279  177660 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 22:53:49.193339  177660 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:53:49.193392  177660 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:53:49.193448  177660 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:53:49.193515  177660 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:53:49.193582  177660 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:53:49.193695  177660 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:53:49.193765  177660 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:53:49.196892  177660 out.go:252]   - Booting up control plane ...
	I1008 22:53:49.197080  177660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:53:49.197212  177660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:53:49.197332  177660 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:53:49.197481  177660 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:53:49.197622  177660 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:53:49.197700  177660 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:53:49.197875  177660 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 22:53:49.197961  177660 kubeadm.go:318] [apiclient] All control plane components are healthy after 7.003531 seconds
	I1008 22:53:49.198079  177660 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 22:53:49.198236  177660 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 22:53:49.198301  177660 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 22:53:49.198513  177660 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-110407 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 22:53:49.198575  177660 kubeadm.go:318] [bootstrap-token] Using token: b6qjpo.cxq78cp83ttrl3wv
	I1008 22:53:49.201408  177660 out.go:252]   - Configuring RBAC rules ...
	I1008 22:53:49.201612  177660 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 22:53:49.201905  177660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 22:53:49.202065  177660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 22:53:49.202200  177660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 22:53:49.202325  177660 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 22:53:49.202424  177660 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 22:53:49.202545  177660 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 22:53:49.202590  177660 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 22:53:49.202638  177660 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 22:53:49.202642  177660 kubeadm.go:318] 
	I1008 22:53:49.202705  177660 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 22:53:49.202709  177660 kubeadm.go:318] 
	I1008 22:53:49.202789  177660 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 22:53:49.202794  177660 kubeadm.go:318] 
	I1008 22:53:49.202820  177660 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 22:53:49.202882  177660 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 22:53:49.202935  177660 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 22:53:49.202939  177660 kubeadm.go:318] 
	I1008 22:53:49.202995  177660 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 22:53:49.202999  177660 kubeadm.go:318] 
	I1008 22:53:49.203053  177660 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 22:53:49.203058  177660 kubeadm.go:318] 
	I1008 22:53:49.203113  177660 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 22:53:49.203191  177660 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 22:53:49.203262  177660 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 22:53:49.203266  177660 kubeadm.go:318] 
	I1008 22:53:49.203354  177660 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 22:53:49.203434  177660 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 22:53:49.203438  177660 kubeadm.go:318] 
	I1008 22:53:49.203526  177660 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token b6qjpo.cxq78cp83ttrl3wv \
	I1008 22:53:49.203635  177660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 22:53:49.203656  177660 kubeadm.go:318] 	--control-plane 
	I1008 22:53:49.203660  177660 kubeadm.go:318] 
	I1008 22:53:49.203749  177660 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 22:53:49.203754  177660 kubeadm.go:318] 
	I1008 22:53:49.203840  177660 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token b6qjpo.cxq78cp83ttrl3wv \
	I1008 22:53:49.203958  177660 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 22:53:49.203965  177660 cni.go:84] Creating CNI manager for ""
	I1008 22:53:49.203972  177660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:53:49.207014  177660 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 22:53:49.209910  177660 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 22:53:49.214297  177660 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1008 22:53:49.214359  177660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 22:53:49.237990  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 22:53:50.226747  177660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 22:53:50.226818  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:50.226884  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-110407 minikube.k8s.io/updated_at=2025_10_08T22_53_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=old-k8s-version-110407 minikube.k8s.io/primary=true
	I1008 22:53:50.429332  177660 ops.go:34] apiserver oom_adj: -16
	I1008 22:53:50.429431  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:50.930046  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:51.429664  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:51.930296  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:52.430074  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:52.930152  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:53.429660  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:53.930142  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:54.430267  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:54.930021  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:55.429610  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:55.930238  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:56.429585  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:56.929762  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:57.429796  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:57.929667  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:58.430397  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:58.930140  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:59.430529  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:53:59.930487  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:54:00.430475  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:54:00.929577  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:54:01.430142  177660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:54:01.531023  177660 kubeadm.go:1113] duration metric: took 11.304265007s to wait for elevateKubeSystemPrivileges
	I1008 22:54:01.531057  177660 kubeadm.go:402] duration metric: took 27.578741027s to StartCluster
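The burst of `kubectl get sa default` calls above is minikube polling until the default ServiceAccount exists, so that the preceding `create clusterrolebinding minikube-rbac` has a subject to bind; the duration metric shows the wait took roughly 11.3s here. A rough client-go sketch of the same wait loop (kubeconfig path taken from the log, interval and timeout chosen for illustration, not minikube's implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Wait for the "default" ServiceAccount to appear before proceeding.
// Paths, interval, and deadline are illustrative values.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for {
		_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		if time.Now().After(deadline) {
			panic("timed out waiting for default service account")
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```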
	I1008 22:54:01.531075  177660 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:01.531147  177660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:54:01.531851  177660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:01.532067  177660 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:54:01.532178  177660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 22:54:01.532423  177660 config.go:182] Loaded profile config "old-k8s-version-110407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:54:01.532475  177660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:54:01.532535  177660 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-110407"
	I1008 22:54:01.532557  177660 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-110407"
	I1008 22:54:01.532578  177660 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:54:01.532606  177660 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-110407"
	I1008 22:54:01.532659  177660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-110407"
	I1008 22:54:01.533027  177660 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:01.533120  177660 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:01.540202  177660 out.go:179] * Verifying Kubernetes components...
	I1008 22:54:01.543726  177660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:54:01.579448  177660 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-110407"
	I1008 22:54:01.579543  177660 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:54:01.580094  177660 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:01.583958  177660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:54:01.593735  177660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:54:01.593760  177660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:54:01.593847  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:01.604041  177660 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:54:01.604063  177660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:54:01.604127  177660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:01.635594  177660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:01.643302  177660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33051 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:01.850682  177660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 22:54:01.856421  177660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:54:01.869169  177660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:54:01.875330  177660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:54:02.698990  177660 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1008 22:54:03.191513  177660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.335001724s)
	I1008 22:54:03.191557  177660 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.322317622s)
	I1008 22:54:03.192299  177660 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-110407" to be "Ready" ...
	I1008 22:54:03.192534  177660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.317127588s)
	I1008 22:54:03.203039  177660 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-110407" context rescaled to 1 replicas
	I1008 22:54:03.206628  177660 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 22:54:03.210147  177660 addons.go:514] duration metric: took 1.677655251s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1008 22:54:05.195872  177660 node_ready.go:57] node "old-k8s-version-110407" has "Ready":"False" status (will retry)
	W1008 22:54:07.196048  177660 node_ready.go:57] node "old-k8s-version-110407" has "Ready":"False" status (will retry)
	I1008 22:54:11.241650  171796 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001021269s
	I1008 22:54:11.242003  171796 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001183494s
	I1008 22:54:11.242429  171796 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001554534s
	I1008 22:54:11.242483  171796 kubeadm.go:318] 
	I1008 22:54:11.242576  171796 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 22:54:11.242689  171796 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 22:54:11.242785  171796 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 22:54:11.242913  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 22:54:11.242993  171796 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 22:54:11.243075  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 22:54:11.243083  171796 kubeadm.go:318] 
	I1008 22:54:11.248122  171796 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:54:11.248367  171796 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:54:11.248483  171796 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:54:11.249088  171796 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1008 22:54:11.249165  171796 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1008 22:54:11.249321  171796 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-385382 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000927309s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001021269s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001183494s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001554534s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1008 22:54:11.249403  171796 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1008 22:54:11.799431  171796 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:54:11.813202  171796 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:54:11.813258  171796 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:54:11.821436  171796 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:54:11.821456  171796 kubeadm.go:157] found existing configuration files:
	
	I1008 22:54:11.821507  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:54:11.829391  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:54:11.829500  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:54:11.836959  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:54:11.845137  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:54:11.845208  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:54:11.852823  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:54:11.861211  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:54:11.861296  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:54:11.868785  171796 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:54:11.876745  171796 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:54:11.876856  171796 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:54:11.884940  171796 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:54:11.924932  171796 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:54:11.925194  171796 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:54:11.951034  171796 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:54:11.951108  171796 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:54:11.951145  171796 kubeadm.go:318] OS: Linux
	I1008 22:54:11.951193  171796 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:54:11.951243  171796 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:54:11.951293  171796 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:54:11.951343  171796 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:54:11.951394  171796 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:54:11.951449  171796 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:54:11.951497  171796 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:54:11.951548  171796 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:54:11.951596  171796 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:54:12.036161  171796 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:54:12.036270  171796 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:54:12.036361  171796 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:54:12.054076  171796 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:54:12.061736  171796 out.go:252]   - Generating certificates and keys ...
	I1008 22:54:12.061833  171796 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:54:12.061898  171796 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:54:12.061975  171796 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 22:54:12.062036  171796 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1008 22:54:12.062106  171796 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 22:54:12.062160  171796 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1008 22:54:12.062224  171796 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1008 22:54:12.062285  171796 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1008 22:54:12.062359  171796 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 22:54:12.062432  171796 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 22:54:12.062470  171796 kubeadm.go:318] [certs] Using the existing "sa" key
	I1008 22:54:12.062534  171796 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:54:12.625613  171796 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:54:12.866049  171796 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:54:13.055455  171796 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	W1008 22:54:09.695224  177660 node_ready.go:57] node "old-k8s-version-110407" has "Ready":"False" status (will retry)
	W1008 22:54:11.696925  177660 node_ready.go:57] node "old-k8s-version-110407" has "Ready":"False" status (will retry)
	I1008 22:54:14.357749  171796 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:54:14.949936  171796 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:54:14.950545  171796 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:54:14.953816  171796 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1008 22:54:14.195799  177660 node_ready.go:57] node "old-k8s-version-110407" has "Ready":"False" status (will retry)
	I1008 22:54:16.201712  177660 node_ready.go:49] node "old-k8s-version-110407" is "Ready"
	I1008 22:54:16.201741  177660 node_ready.go:38] duration metric: took 13.009423924s for node "old-k8s-version-110407" to be "Ready" ...
	I1008 22:54:16.201754  177660 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:54:16.201810  177660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:54:16.240880  177660 api_server.go:72] duration metric: took 14.708780473s to wait for apiserver process to appear ...
	I1008 22:54:16.240908  177660 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:54:16.240930  177660 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:54:16.271593  177660 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 22:54:16.277189  177660 api_server.go:141] control plane version: v1.28.0
	I1008 22:54:16.277219  177660 api_server.go:131] duration metric: took 36.303315ms to wait for apiserver health ...
	I1008 22:54:16.277228  177660 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:54:16.296870  177660 system_pods.go:59] 8 kube-system pods found
	I1008 22:54:16.296911  177660 system_pods.go:61] "coredns-5dd5756b68-p9wsf" [94a25734-c268-4a26-8995-467082f156ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:54:16.296919  177660 system_pods.go:61] "etcd-old-k8s-version-110407" [9341d2d7-8457-4042-953a-042454abf107] Running
	I1008 22:54:16.296926  177660 system_pods.go:61] "kindnet-dzbkd" [293adcb3-a304-42a9-8533-ef23cf040ea6] Running
	I1008 22:54:16.296930  177660 system_pods.go:61] "kube-apiserver-old-k8s-version-110407" [ecdf237a-8269-4ebc-a83b-0f08d6f8157f] Running
	I1008 22:54:16.296935  177660 system_pods.go:61] "kube-controller-manager-old-k8s-version-110407" [8f6a76d5-b9f0-494e-91b8-f3800acb243c] Running
	I1008 22:54:16.296944  177660 system_pods.go:61] "kube-proxy-gsbl4" [cccf2800-b3c8-4684-bc54-d88b59e04bb6] Running
	I1008 22:54:16.296948  177660 system_pods.go:61] "kube-scheduler-old-k8s-version-110407" [695e473d-ed17-4a6f-ada7-b54cde1e5ddc] Running
	I1008 22:54:16.296955  177660 system_pods.go:61] "storage-provisioner" [6105db1d-9197-46c6-8ae0-49fe2291d679] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:54:16.296960  177660 system_pods.go:74] duration metric: took 19.726851ms to wait for pod list to return data ...
	I1008 22:54:16.296968  177660 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:54:16.300425  177660 default_sa.go:45] found service account: "default"
	I1008 22:54:16.300447  177660 default_sa.go:55] duration metric: took 3.474001ms for default service account to be created ...
	I1008 22:54:16.300457  177660 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:54:16.303993  177660 system_pods.go:86] 8 kube-system pods found
	I1008 22:54:16.304075  177660 system_pods.go:89] "coredns-5dd5756b68-p9wsf" [94a25734-c268-4a26-8995-467082f156ae] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:54:16.304097  177660 system_pods.go:89] "etcd-old-k8s-version-110407" [9341d2d7-8457-4042-953a-042454abf107] Running
	I1008 22:54:16.304121  177660 system_pods.go:89] "kindnet-dzbkd" [293adcb3-a304-42a9-8533-ef23cf040ea6] Running
	I1008 22:54:16.304153  177660 system_pods.go:89] "kube-apiserver-old-k8s-version-110407" [ecdf237a-8269-4ebc-a83b-0f08d6f8157f] Running
	I1008 22:54:16.304176  177660 system_pods.go:89] "kube-controller-manager-old-k8s-version-110407" [8f6a76d5-b9f0-494e-91b8-f3800acb243c] Running
	I1008 22:54:16.304198  177660 system_pods.go:89] "kube-proxy-gsbl4" [cccf2800-b3c8-4684-bc54-d88b59e04bb6] Running
	I1008 22:54:16.304234  177660 system_pods.go:89] "kube-scheduler-old-k8s-version-110407" [695e473d-ed17-4a6f-ada7-b54cde1e5ddc] Running
	I1008 22:54:16.304263  177660 system_pods.go:89] "storage-provisioner" [6105db1d-9197-46c6-8ae0-49fe2291d679] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:54:16.304318  177660 retry.go:31] will retry after 198.603941ms: missing components: kube-dns
	I1008 22:54:16.508153  177660 system_pods.go:86] 8 kube-system pods found
	I1008 22:54:16.508241  177660 system_pods.go:89] "coredns-5dd5756b68-p9wsf" [94a25734-c268-4a26-8995-467082f156ae] Running
	I1008 22:54:16.508264  177660 system_pods.go:89] "etcd-old-k8s-version-110407" [9341d2d7-8457-4042-953a-042454abf107] Running
	I1008 22:54:16.508310  177660 system_pods.go:89] "kindnet-dzbkd" [293adcb3-a304-42a9-8533-ef23cf040ea6] Running
	I1008 22:54:16.508337  177660 system_pods.go:89] "kube-apiserver-old-k8s-version-110407" [ecdf237a-8269-4ebc-a83b-0f08d6f8157f] Running
	I1008 22:54:16.508359  177660 system_pods.go:89] "kube-controller-manager-old-k8s-version-110407" [8f6a76d5-b9f0-494e-91b8-f3800acb243c] Running
	I1008 22:54:16.508394  177660 system_pods.go:89] "kube-proxy-gsbl4" [cccf2800-b3c8-4684-bc54-d88b59e04bb6] Running
	I1008 22:54:16.508420  177660 system_pods.go:89] "kube-scheduler-old-k8s-version-110407" [695e473d-ed17-4a6f-ada7-b54cde1e5ddc] Running
	I1008 22:54:16.508439  177660 system_pods.go:89] "storage-provisioner" [6105db1d-9197-46c6-8ae0-49fe2291d679] Running
	I1008 22:54:16.508479  177660 system_pods.go:126] duration metric: took 208.01449ms to wait for k8s-apps to be running ...
	I1008 22:54:16.508506  177660 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:54:16.508593  177660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:54:16.523991  177660 system_svc.go:56] duration metric: took 15.47611ms WaitForService to wait for kubelet
	I1008 22:54:16.524072  177660 kubeadm.go:586] duration metric: took 14.991976112s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:54:16.524108  177660 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:54:16.535127  177660 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:54:16.535167  177660 node_conditions.go:123] node cpu capacity is 2
	I1008 22:54:16.535182  177660 node_conditions.go:105] duration metric: took 11.068133ms to run NodePressure ...
	I1008 22:54:16.535195  177660 start.go:241] waiting for startup goroutines ...
	I1008 22:54:16.535202  177660 start.go:246] waiting for cluster config update ...
	I1008 22:54:16.535213  177660 start.go:255] writing updated cluster config ...
	I1008 22:54:16.535498  177660 ssh_runner.go:195] Run: rm -f paused
	I1008 22:54:16.541103  177660 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:54:16.546888  177660 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-p9wsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:16.555389  177660 pod_ready.go:94] pod "coredns-5dd5756b68-p9wsf" is "Ready"
	I1008 22:54:16.555414  177660 pod_ready.go:86] duration metric: took 8.500398ms for pod "coredns-5dd5756b68-p9wsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:16.562439  177660 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:16.567794  177660 pod_ready.go:94] pod "etcd-old-k8s-version-110407" is "Ready"
	I1008 22:54:16.567821  177660 pod_ready.go:86] duration metric: took 5.356042ms for pod "etcd-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:16.571592  177660 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:16.577451  177660 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-110407" is "Ready"
	I1008 22:54:16.577480  177660 pod_ready.go:86] duration metric: took 5.854328ms for pod "kube-apiserver-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:16.580896  177660 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:16.945927  177660 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-110407" is "Ready"
	I1008 22:54:16.945955  177660 pod_ready.go:86] duration metric: took 365.032349ms for pod "kube-controller-manager-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:17.145840  177660 pod_ready.go:83] waiting for pod "kube-proxy-gsbl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:17.545389  177660 pod_ready.go:94] pod "kube-proxy-gsbl4" is "Ready"
	I1008 22:54:17.545416  177660 pod_ready.go:86] duration metric: took 399.551578ms for pod "kube-proxy-gsbl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:17.746481  177660 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:18.145489  177660 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-110407" is "Ready"
	I1008 22:54:18.145578  177660 pod_ready.go:86] duration metric: took 399.069972ms for pod "kube-scheduler-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:54:18.145608  177660 pod_ready.go:40] duration metric: took 1.604463261s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:54:18.203025  177660 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1008 22:54:18.206166  177660 out.go:203] 
	W1008 22:54:18.209068  177660 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1008 22:54:18.212153  177660 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1008 22:54:18.216122  177660 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-110407" cluster and "default" namespace by default
	I1008 22:54:14.957439  171796 out.go:252]   - Booting up control plane ...
	I1008 22:54:14.957577  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:54:14.957677  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:54:14.958834  171796 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:54:14.977498  171796 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:54:14.978141  171796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:54:14.986378  171796 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:54:14.986689  171796 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:54:14.986746  171796 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:54:15.155337  171796 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:54:15.155469  171796 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:54:16.160644  171796 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.005376143s
	I1008 22:54:16.165475  171796 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:54:16.165584  171796 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1008 22:54:16.165708  171796 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:54:16.165790  171796 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
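The control-plane-check lines above show kubeadm probing the kube-apiserver /livez endpoint on 192.168.76.2:8443 and the controller-manager and scheduler health ports on localhost until they respond, with a 4m0s budget. A simplified Go sketch of that style of probe loop, with an illustrative poll interval, TLS verification disabled for brevity, and no claim to match kubeadm's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Poll each health endpoint until it returns 200 or the deadline passes.
// Endpoints mirror the ones reported in the log; interval and timeout are
// illustrative values, not kubeadm's own.
func main() {
	endpoints := []string{
		"https://192.168.76.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for _, url := range endpoints {
		for {
			resp, err := client.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println(url, "is healthy")
				break
			}
			if resp != nil {
				resp.Body.Close()
			}
			if time.Now().After(deadline) {
				fmt.Println(url, "is not healthy before deadline")
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
```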
	
	
	==> CRI-O <==
	Oct 08 22:54:16 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:16.167154251Z" level=info msg="Starting container: 597d82fd3c3b926f7fc8adc9b55e1331b392d8469d9e50780bbc6e6071f51b30" id=aa462012-c0e7-436d-a9ad-de429ad843b8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:54:16 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:16.16852047Z" level=info msg="Started container" PID=1916 containerID=156e8a4b2047c7b4e17934a1b5d91d85d5f7221cb61cd35e90ba0343fa25d478 description=kube-system/storage-provisioner/storage-provisioner id=d6e2b1db-1cc3-4c4c-a40b-f07f89c574a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72291ce7e977842f67b9beeee1eff46b2bab5872b4743b9436adc474b82c40f8
	Oct 08 22:54:16 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:16.17855824Z" level=info msg="Started container" PID=1917 containerID=597d82fd3c3b926f7fc8adc9b55e1331b392d8469d9e50780bbc6e6071f51b30 description=kube-system/coredns-5dd5756b68-p9wsf/coredns id=aa462012-c0e7-436d-a9ad-de429ad843b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=88e3eb9f3c474efa46bbb69e620ff4455f494fb7bc5ff99ffcdb876ee048aaf0
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.749713042Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4bb7deb7-a50d-47b5-987c-4723ac7b93a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.749796769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.756156498Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:99ae9de7567eec20da6c8dfee427c95cff7312a6c1f95f634f6328cc41614341 UID:b76316e1-0819-46ee-90c6-eb3ec4a3f531 NetNS:/var/run/netns/31f1d843-f324-4b87-addc-63df5549914d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012e00b0}] Aliases:map[]}"
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.756373838Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.767374819Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:99ae9de7567eec20da6c8dfee427c95cff7312a6c1f95f634f6328cc41614341 UID:b76316e1-0819-46ee-90c6-eb3ec4a3f531 NetNS:/var/run/netns/31f1d843-f324-4b87-addc-63df5549914d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012e00b0}] Aliases:map[]}"
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.767524958Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.774042302Z" level=info msg="Ran pod sandbox 99ae9de7567eec20da6c8dfee427c95cff7312a6c1f95f634f6328cc41614341 with infra container: default/busybox/POD" id=4bb7deb7-a50d-47b5-987c-4723ac7b93a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.775385161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=057f3c83-c6c7-4fec-b746-f9c884d6a288 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.775521433Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=057f3c83-c6c7-4fec-b746-f9c884d6a288 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.775559456Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=057f3c83-c6c7-4fec-b746-f9c884d6a288 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.776298222Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d53bb4f3-dbf4-4f01-9f89-cb8886166ddf name=/runtime.v1.ImageService/PullImage
	Oct 08 22:54:18 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:18.779927737Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 08 22:54:20 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:20.791710847Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d53bb4f3-dbf4-4f01-9f89-cb8886166ddf name=/runtime.v1.ImageService/PullImage
	Oct 08 22:54:20 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:20.792525511Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=35c205f8-a883-456d-8229-386239eeaf08 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:54:20 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:20.794480102Z" level=info msg="Creating container: default/busybox/busybox" id=ce259987-3966-4aa7-9c90-bf48bb19ef41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:54:20 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:20.795573998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:54:20 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:20.802364583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:54:20 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:20.802914727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:54:20 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:20.820856015Z" level=info msg="Created container 38042c0c70b0820c5ec60a5878f5efc9d272e609b14a43a8860f6feed0f41ed2: default/busybox/busybox" id=ce259987-3966-4aa7-9c90-bf48bb19ef41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:54:20 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:20.824948206Z" level=info msg="Starting container: 38042c0c70b0820c5ec60a5878f5efc9d272e609b14a43a8860f6feed0f41ed2" id=05a75c98-b22d-44f3-a167-00e7ce36d129 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:54:20 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:20.827163723Z" level=info msg="Started container" PID=1978 containerID=38042c0c70b0820c5ec60a5878f5efc9d272e609b14a43a8860f6feed0f41ed2 description=default/busybox/busybox id=05a75c98-b22d-44f3-a167-00e7ce36d129 name=/runtime.v1.RuntimeService/StartContainer sandboxID=99ae9de7567eec20da6c8dfee427c95cff7312a6c1f95f634f6328cc41614341
	Oct 08 22:54:28 old-k8s-version-110407 crio[835]: time="2025-10-08T22:54:28.621515241Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	38042c0c70b08       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   99ae9de7567ee       busybox                                          default
	597d82fd3c3b9       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   88e3eb9f3c474       coredns-5dd5756b68-p9wsf                         kube-system
	156e8a4b2047c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   72291ce7e9778       storage-provisioner                              kube-system
	0210611f02f92       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   41b44375827fb       kindnet-dzbkd                                    kube-system
	7dc92222be4c9       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   0c54f9ecfafb3       kube-proxy-gsbl4                                 kube-system
	fc2435a0c77a3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   668a322031a7e       kube-apiserver-old-k8s-version-110407            kube-system
	623c0a88e5823       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   274f44add2ab4       kube-controller-manager-old-k8s-version-110407   kube-system
	ea0bd577dc764       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   9bd11d0bb0b16       kube-scheduler-old-k8s-version-110407            kube-system
	330cfdfb0c63b       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   e8879d786cad6       etcd-old-k8s-version-110407                      kube-system
	
	
	==> coredns [597d82fd3c3b926f7fc8adc9b55e1331b392d8469d9e50780bbc6e6071f51b30] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40712 - 31476 "HINFO IN 1390431978362876094.561514471266780666. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013922861s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-110407
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-110407
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=old-k8s-version-110407
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_53_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:53:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-110407
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:54:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:54:20 +0000   Wed, 08 Oct 2025 22:53:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:54:20 +0000   Wed, 08 Oct 2025 22:53:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:54:20 +0000   Wed, 08 Oct 2025 22:53:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:54:20 +0000   Wed, 08 Oct 2025 22:54:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-110407
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1a9a2148c8b428895f1d83e1054cd9e
	  System UUID:                8dba2821-4735-44a2-98ca-98cb78fcdea2
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-p9wsf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-110407                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-dzbkd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-110407             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-110407    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-gsbl4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-110407             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-110407 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-110407 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-110407 event: Registered Node old-k8s-version-110407 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-110407 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 22:21] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:22] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:27] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [330cfdfb0c63ba9dba8db8458b24878ff33052c0231f4a74e4daba9d9824060f] <==
	{"level":"info","ts":"2025-10-08T22:53:42.187725Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-08T22:53:42.179284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-08T22:53:42.188042Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-08T22:53:42.179325Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-08T22:53:42.188224Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-08T22:53:42.188477Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-08T22:53:42.190055Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-08T22:53:42.833678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-08T22:53:42.833794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-08T22:53:42.833844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-08T22:53:42.833892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-08T22:53:42.833931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-08T22:53:42.833974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-08T22:53:42.834009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-08T22:53:42.83776Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T22:53:42.841868Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-110407 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-08T22:53:42.841949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-08T22:53:42.842697Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T22:53:42.842817Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T22:53:42.84287Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T22:53:42.843479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-08T22:53:42.845668Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-08T22:53:42.854659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-08T22:53:42.850415Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-08T22:53:42.85777Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:54:30 up  1:37,  0 user,  load average: 1.03, 1.30, 1.68
	Linux old-k8s-version-110407 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0210611f02f923947774add3f1d82fdae21d0b9f2eddc3d56eb876d0d1d8cfdb] <==
	I1008 22:54:05.008542       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:54:05.008980       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 22:54:05.010446       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:54:05.010549       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:54:05.010587       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:54:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:54:05.208681       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:54:05.208765       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:54:05.208799       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:54:05.209592       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1008 22:54:05.408890       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:54:05.408924       1 metrics.go:72] Registering metrics
	I1008 22:54:05.408992       1 controller.go:711] "Syncing nftables rules"
	I1008 22:54:15.209706       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:54:15.209768       1 main.go:301] handling current node
	I1008 22:54:25.209187       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:54:25.209221       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fc2435a0c77a3a5dfd801b63df2a41089bd2c921ebe547d47bc50c65b01e31a1] <==
	I1008 22:53:45.802144       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1008 22:53:45.802150       1 cache.go:39] Caches are synced for autoregister controller
	I1008 22:53:45.860602       1 controller.go:624] quota admission added evaluator for: namespaces
	I1008 22:53:45.877753       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1008 22:53:45.877874       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1008 22:53:45.877907       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1008 22:53:45.879346       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 22:53:45.888076       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1008 22:53:45.909118       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1008 22:53:45.939022       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:53:46.510098       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1008 22:53:46.516988       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1008 22:53:46.517082       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:53:47.177522       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 22:53:47.226967       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 22:53:47.346160       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 22:53:47.353707       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1008 22:53:47.355199       1 controller.go:624] quota admission added evaluator for: endpoints
	I1008 22:53:47.360806       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:53:47.703622       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1008 22:53:49.069465       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1008 22:53:49.083932       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 22:53:49.102967       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1008 22:54:01.079922       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1008 22:54:01.230832       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [623c0a88e582388f51376c41e500a2e80dd469a28789107227303f8940ba2919] <==
	I1008 22:54:00.678466       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1008 22:54:00.698477       1 shared_informer.go:318] Caches are synced for resource quota
	I1008 22:54:00.746510       1 shared_informer.go:318] Caches are synced for resource quota
	I1008 22:54:01.097258       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dzbkd"
	I1008 22:54:01.103613       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gsbl4"
	I1008 22:54:01.169892       1 shared_informer.go:318] Caches are synced for garbage collector
	I1008 22:54:01.173417       1 shared_informer.go:318] Caches are synced for garbage collector
	I1008 22:54:01.173454       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1008 22:54:01.236326       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1008 22:54:01.654199       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-9bkql"
	I1008 22:54:01.670304       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-p9wsf"
	I1008 22:54:01.701096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="464.528832ms"
	I1008 22:54:01.755476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.317112ms"
	I1008 22:54:01.821095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.568007ms"
	I1008 22:54:01.821314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.683µs"
	I1008 22:54:02.738580       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1008 22:54:02.772862       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-9bkql"
	I1008 22:54:02.788159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.534282ms"
	I1008 22:54:02.797802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.583717ms"
	I1008 22:54:02.797909       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.673µs"
	I1008 22:54:15.746804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.772µs"
	I1008 22:54:15.776771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.05µs"
	I1008 22:54:16.512296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.226267ms"
	I1008 22:54:16.512486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.554µs"
	I1008 22:54:20.634858       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [7dc92222be4c9fe263b16c2363c8a97940eb4656d1101b125dca89b712cca911] <==
	I1008 22:54:02.367099       1 server_others.go:69] "Using iptables proxy"
	I1008 22:54:02.383658       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1008 22:54:02.483465       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:54:02.487171       1 server_others.go:152] "Using iptables Proxier"
	I1008 22:54:02.487206       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1008 22:54:02.487214       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1008 22:54:02.487248       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1008 22:54:02.487832       1 server.go:846] "Version info" version="v1.28.0"
	I1008 22:54:02.487847       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:54:02.491906       1 config.go:188] "Starting service config controller"
	I1008 22:54:02.492045       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1008 22:54:02.492109       1 config.go:97] "Starting endpoint slice config controller"
	I1008 22:54:02.492139       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1008 22:54:02.492972       1 config.go:315] "Starting node config controller"
	I1008 22:54:02.500768       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1008 22:54:02.592221       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1008 22:54:02.592282       1 shared_informer.go:318] Caches are synced for service config
	I1008 22:54:02.608174       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ea0bd577dc764a00f5491ed73b708919a125dfc79cd9b158c5a3b1fd3be80602] <==
	W1008 22:53:46.378854       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1008 22:53:46.380938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1008 22:53:46.381080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1008 22:53:46.381120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1008 22:53:46.381190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1008 22:53:46.381225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1008 22:53:46.381325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1008 22:53:46.381363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1008 22:53:46.381446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 22:53:46.381483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1008 22:53:46.381558       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 22:53:46.381595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1008 22:53:46.381682       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 22:53:46.381732       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1008 22:53:46.382078       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1008 22:53:46.382392       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 22:53:46.382190       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1008 22:53:46.382489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1008 22:53:46.382254       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 22:53:46.382614       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1008 22:53:46.382306       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1008 22:53:46.382706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1008 22:53:46.382357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1008 22:53:46.382780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1008 22:53:47.960235       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 22:54:01 old-k8s-version-110407 kubelet[1363]: I1008 22:54:01.221132    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jmpj\" (UniqueName: \"kubernetes.io/projected/293adcb3-a304-42a9-8533-ef23cf040ea6-kube-api-access-6jmpj\") pod \"kindnet-dzbkd\" (UID: \"293adcb3-a304-42a9-8533-ef23cf040ea6\") " pod="kube-system/kindnet-dzbkd"
	Oct 08 22:54:01 old-k8s-version-110407 kubelet[1363]: I1008 22:54:01.221158    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/293adcb3-a304-42a9-8533-ef23cf040ea6-lib-modules\") pod \"kindnet-dzbkd\" (UID: \"293adcb3-a304-42a9-8533-ef23cf040ea6\") " pod="kube-system/kindnet-dzbkd"
	Oct 08 22:54:01 old-k8s-version-110407 kubelet[1363]: E1008 22:54:01.333776    1363 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 08 22:54:01 old-k8s-version-110407 kubelet[1363]: E1008 22:54:01.333947    1363 projected.go:198] Error preparing data for projected volume kube-api-access-6jmpj for pod kube-system/kindnet-dzbkd: configmap "kube-root-ca.crt" not found
	Oct 08 22:54:01 old-k8s-version-110407 kubelet[1363]: E1008 22:54:01.334087    1363 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/293adcb3-a304-42a9-8533-ef23cf040ea6-kube-api-access-6jmpj podName:293adcb3-a304-42a9-8533-ef23cf040ea6 nodeName:}" failed. No retries permitted until 2025-10-08 22:54:01.834058519 +0000 UTC m=+12.796965458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6jmpj" (UniqueName: "kubernetes.io/projected/293adcb3-a304-42a9-8533-ef23cf040ea6-kube-api-access-6jmpj") pod "kindnet-dzbkd" (UID: "293adcb3-a304-42a9-8533-ef23cf040ea6") : configmap "kube-root-ca.crt" not found
	Oct 08 22:54:01 old-k8s-version-110407 kubelet[1363]: E1008 22:54:01.340081    1363 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 08 22:54:01 old-k8s-version-110407 kubelet[1363]: E1008 22:54:01.340245    1363 projected.go:198] Error preparing data for projected volume kube-api-access-thmqs for pod kube-system/kube-proxy-gsbl4: configmap "kube-root-ca.crt" not found
	Oct 08 22:54:01 old-k8s-version-110407 kubelet[1363]: E1008 22:54:01.340355    1363 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cccf2800-b3c8-4684-bc54-d88b59e04bb6-kube-api-access-thmqs podName:cccf2800-b3c8-4684-bc54-d88b59e04bb6 nodeName:}" failed. No retries permitted until 2025-10-08 22:54:01.840333415 +0000 UTC m=+12.803240362 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-thmqs" (UniqueName: "kubernetes.io/projected/cccf2800-b3c8-4684-bc54-d88b59e04bb6-kube-api-access-thmqs") pod "kube-proxy-gsbl4" (UID: "cccf2800-b3c8-4684-bc54-d88b59e04bb6") : configmap "kube-root-ca.crt" not found
	Oct 08 22:54:02 old-k8s-version-110407 kubelet[1363]: W1008 22:54:02.027070    1363 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/crio-41b44375827fbd84e5ded48da87f128de09a16acc8095793eea70aae823715ec WatchSource:0}: Error finding container 41b44375827fbd84e5ded48da87f128de09a16acc8095793eea70aae823715ec: Status 404 returned error can't find the container with id 41b44375827fbd84e5ded48da87f128de09a16acc8095793eea70aae823715ec
	Oct 08 22:54:02 old-k8s-version-110407 kubelet[1363]: W1008 22:54:02.050537    1363 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/crio-0c54f9ecfafb369500fd05baf6f74bf770417aae36000621fdd33841e35f3022 WatchSource:0}: Error finding container 0c54f9ecfafb369500fd05baf6f74bf770417aae36000621fdd33841e35f3022: Status 404 returned error can't find the container with id 0c54f9ecfafb369500fd05baf6f74bf770417aae36000621fdd33841e35f3022
	Oct 08 22:54:05 old-k8s-version-110407 kubelet[1363]: I1008 22:54:05.453992    1363 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gsbl4" podStartSLOduration=4.453853655 podCreationTimestamp="2025-10-08 22:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:54:02.458953619 +0000 UTC m=+13.421860566" watchObservedRunningTime="2025-10-08 22:54:05.453853655 +0000 UTC m=+16.416760602"
	Oct 08 22:54:09 old-k8s-version-110407 kubelet[1363]: I1008 22:54:09.340937    1363 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-dzbkd" podStartSLOduration=5.507137613 podCreationTimestamp="2025-10-08 22:54:01 +0000 UTC" firstStartedPulling="2025-10-08 22:54:02.041040017 +0000 UTC m=+13.003946964" lastFinishedPulling="2025-10-08 22:54:04.874781082 +0000 UTC m=+15.837688021" observedRunningTime="2025-10-08 22:54:05.45524174 +0000 UTC m=+16.418148687" watchObservedRunningTime="2025-10-08 22:54:09.34087867 +0000 UTC m=+20.303785625"
	Oct 08 22:54:15 old-k8s-version-110407 kubelet[1363]: I1008 22:54:15.705620    1363 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 08 22:54:15 old-k8s-version-110407 kubelet[1363]: I1008 22:54:15.746434    1363 topology_manager.go:215] "Topology Admit Handler" podUID="94a25734-c268-4a26-8995-467082f156ae" podNamespace="kube-system" podName="coredns-5dd5756b68-p9wsf"
	Oct 08 22:54:15 old-k8s-version-110407 kubelet[1363]: I1008 22:54:15.751813    1363 topology_manager.go:215] "Topology Admit Handler" podUID="6105db1d-9197-46c6-8ae0-49fe2291d679" podNamespace="kube-system" podName="storage-provisioner"
	Oct 08 22:54:15 old-k8s-version-110407 kubelet[1363]: I1008 22:54:15.831298    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94a25734-c268-4a26-8995-467082f156ae-config-volume\") pod \"coredns-5dd5756b68-p9wsf\" (UID: \"94a25734-c268-4a26-8995-467082f156ae\") " pod="kube-system/coredns-5dd5756b68-p9wsf"
	Oct 08 22:54:15 old-k8s-version-110407 kubelet[1363]: I1008 22:54:15.831515    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6105db1d-9197-46c6-8ae0-49fe2291d679-tmp\") pod \"storage-provisioner\" (UID: \"6105db1d-9197-46c6-8ae0-49fe2291d679\") " pod="kube-system/storage-provisioner"
	Oct 08 22:54:15 old-k8s-version-110407 kubelet[1363]: I1008 22:54:15.831569    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md65k\" (UniqueName: \"kubernetes.io/projected/94a25734-c268-4a26-8995-467082f156ae-kube-api-access-md65k\") pod \"coredns-5dd5756b68-p9wsf\" (UID: \"94a25734-c268-4a26-8995-467082f156ae\") " pod="kube-system/coredns-5dd5756b68-p9wsf"
	Oct 08 22:54:15 old-k8s-version-110407 kubelet[1363]: I1008 22:54:15.831601    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fkjh\" (UniqueName: \"kubernetes.io/projected/6105db1d-9197-46c6-8ae0-49fe2291d679-kube-api-access-5fkjh\") pod \"storage-provisioner\" (UID: \"6105db1d-9197-46c6-8ae0-49fe2291d679\") " pod="kube-system/storage-provisioner"
	Oct 08 22:54:16 old-k8s-version-110407 kubelet[1363]: W1008 22:54:16.095687    1363 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/crio-88e3eb9f3c474efa46bbb69e620ff4455f494fb7bc5ff99ffcdb876ee048aaf0 WatchSource:0}: Error finding container 88e3eb9f3c474efa46bbb69e620ff4455f494fb7bc5ff99ffcdb876ee048aaf0: Status 404 returned error can't find the container with id 88e3eb9f3c474efa46bbb69e620ff4455f494fb7bc5ff99ffcdb876ee048aaf0
	Oct 08 22:54:16 old-k8s-version-110407 kubelet[1363]: I1008 22:54:16.492094    1363 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.492006826 podCreationTimestamp="2025-10-08 22:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:54:16.477732256 +0000 UTC m=+27.440639203" watchObservedRunningTime="2025-10-08 22:54:16.492006826 +0000 UTC m=+27.454913773"
	Oct 08 22:54:18 old-k8s-version-110407 kubelet[1363]: I1008 22:54:18.447527    1363 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-p9wsf" podStartSLOduration=17.44747097 podCreationTimestamp="2025-10-08 22:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:54:16.494038833 +0000 UTC m=+27.456945781" watchObservedRunningTime="2025-10-08 22:54:18.44747097 +0000 UTC m=+29.410377926"
	Oct 08 22:54:18 old-k8s-version-110407 kubelet[1363]: I1008 22:54:18.447883    1363 topology_manager.go:215] "Topology Admit Handler" podUID="b76316e1-0819-46ee-90c6-eb3ec4a3f531" podNamespace="default" podName="busybox"
	Oct 08 22:54:18 old-k8s-version-110407 kubelet[1363]: I1008 22:54:18.550628    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq472\" (UniqueName: \"kubernetes.io/projected/b76316e1-0819-46ee-90c6-eb3ec4a3f531-kube-api-access-qq472\") pod \"busybox\" (UID: \"b76316e1-0819-46ee-90c6-eb3ec4a3f531\") " pod="default/busybox"
	Oct 08 22:54:18 old-k8s-version-110407 kubelet[1363]: W1008 22:54:18.771373    1363 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/crio-99ae9de7567eec20da6c8dfee427c95cff7312a6c1f95f634f6328cc41614341 WatchSource:0}: Error finding container 99ae9de7567eec20da6c8dfee427c95cff7312a6c1f95f634f6328cc41614341: Status 404 returned error can't find the container with id 99ae9de7567eec20da6c8dfee427c95cff7312a6c1f95f634f6328cc41614341
	
	
	==> storage-provisioner [156e8a4b2047c7b4e17934a1b5d91d85d5f7221cb61cd35e90ba0343fa25d478] <==
	I1008 22:54:16.224432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 22:54:16.279504       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 22:54:16.279622       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 22:54:16.311176       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 22:54:16.311487       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-110407_656e7b8a-f37c-4e47-92a8-15c697aea32e!
	I1008 22:54:16.312315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f2116a6-5967-4c8b-a3c3-8076bb9f79ff", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-110407_656e7b8a-f37c-4e47-92a8-15c697aea32e became leader
	I1008 22:54:16.411656       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-110407_656e7b8a-f37c-4e47-92a8-15c697aea32e!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-110407 -n old-k8s-version-110407
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-110407 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.49s)
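For local triage, the two post-mortem commands above can be replayed outside the test harness. The Go program below is only a minimal sketch of that replay (it is not the helpers_test.go implementation); the profile name and command lines are copied from the post-mortem log entries at helpers_test.go:262 and helpers_test.go:269, everything else is illustrative.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Profile name taken from this report; the command lines below are
		// copied from the post-mortem log entries above.
		profile := "old-k8s-version-110407"
		cmds := [][]string{
			{"out/minikube-linux-arm64", "status", "--format={{.APIServer}}", "-p", profile, "-n", profile},
			{"kubectl", "--context", profile, "get", "po",
				"-o=jsonpath={.items[*].metadata.name}", "-A",
				"--field-selector=status.phase!=Running"},
		}
		for _, c := range cmds {
			// CombinedOutput captures stdout and stderr together.
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			fmt.Printf("$ %s\n%s\n", strings.Join(c, " "), out)
			if err != nil {
				fmt.Println("exit:", err)
			}
		}
	}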

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-110407 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-110407 --alsologtostderr -v=1: exit status 80 (1.768168386s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-110407 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:55:41.127394  183746 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:55:41.127509  183746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:55:41.127521  183746 out.go:374] Setting ErrFile to fd 2...
	I1008 22:55:41.127526  183746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:55:41.127765  183746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:55:41.128016  183746 out.go:368] Setting JSON to false
	I1008 22:55:41.128040  183746 mustload.go:65] Loading cluster: old-k8s-version-110407
	I1008 22:55:41.128453  183746 config.go:182] Loaded profile config "old-k8s-version-110407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:55:41.128903  183746 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:55:41.148888  183746 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:55:41.149208  183746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:55:41.204560  183746 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-08 22:55:41.194971093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:55:41.205218  183746 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-110407 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1008 22:55:41.208714  183746 out.go:179] * Pausing node old-k8s-version-110407 ... 
	I1008 22:55:41.212419  183746 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:55:41.212762  183746 ssh_runner.go:195] Run: systemctl --version
	I1008 22:55:41.212852  183746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:55:41.230012  183746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:55:41.332089  183746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:55:41.344802  183746 pause.go:52] kubelet running: true
	I1008 22:55:41.344872  183746 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 22:55:41.577338  183746 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 22:55:41.577454  183746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 22:55:41.644924  183746 cri.go:89] found id: "3edfc0b30693211c28865d5219a9146586f475e536683705e62e7fee3cbd1d18"
	I1008 22:55:41.645001  183746 cri.go:89] found id: "d72eb628fb497376f9eefcba7d2f6f36dfe625924dc6e8dd4130842c7d32eee3"
	I1008 22:55:41.645023  183746 cri.go:89] found id: "dd58f37f74850810388d93ef413e3efb1a36fce33e2dc09297330f27a8cbf5c1"
	I1008 22:55:41.645035  183746 cri.go:89] found id: "0e580a0fb08ba90a90f58f3c01972f80ab2064c7b7f180e3447dce96336f16c7"
	I1008 22:55:41.645040  183746 cri.go:89] found id: "1089bc6ec608e4f6ff237f1aa25f35c60b338495b151b22c2b52a17146b6be9c"
	I1008 22:55:41.645051  183746 cri.go:89] found id: "31d5d12b3335847a7a1c8dd5ff7e9ed344177e872405a18bffd7fef7d424e626"
	I1008 22:55:41.645054  183746 cri.go:89] found id: "aff39630382e0b657df55be14c63dfb5df04e731f6be4ae06c64640cbeb9f074"
	I1008 22:55:41.645058  183746 cri.go:89] found id: "e0004b069fee46142e9b07ac07faf8907f947de19c729c522213256f72792263"
	I1008 22:55:41.645062  183746 cri.go:89] found id: "0434b78e1b9c7b8bd208c0bd06784b6ae445fc7d2cf410fea035aea751050584"
	I1008 22:55:41.645069  183746 cri.go:89] found id: "c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f"
	I1008 22:55:41.645076  183746 cri.go:89] found id: "a87a33253a6361e87d6c423aff7d47025b3d27557d3d58981f75a36ab84eb3a8"
	I1008 22:55:41.645079  183746 cri.go:89] found id: ""
	I1008 22:55:41.645141  183746 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 22:55:41.664348  183746 retry.go:31] will retry after 283.599437ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:55:41Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:55:41.949013  183746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:55:41.963705  183746 pause.go:52] kubelet running: false
	I1008 22:55:41.963812  183746 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 22:55:42.182607  183746 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 22:55:42.182729  183746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 22:55:42.269184  183746 cri.go:89] found id: "3edfc0b30693211c28865d5219a9146586f475e536683705e62e7fee3cbd1d18"
	I1008 22:55:42.269214  183746 cri.go:89] found id: "d72eb628fb497376f9eefcba7d2f6f36dfe625924dc6e8dd4130842c7d32eee3"
	I1008 22:55:42.269220  183746 cri.go:89] found id: "dd58f37f74850810388d93ef413e3efb1a36fce33e2dc09297330f27a8cbf5c1"
	I1008 22:55:42.269225  183746 cri.go:89] found id: "0e580a0fb08ba90a90f58f3c01972f80ab2064c7b7f180e3447dce96336f16c7"
	I1008 22:55:42.269238  183746 cri.go:89] found id: "1089bc6ec608e4f6ff237f1aa25f35c60b338495b151b22c2b52a17146b6be9c"
	I1008 22:55:42.269288  183746 cri.go:89] found id: "31d5d12b3335847a7a1c8dd5ff7e9ed344177e872405a18bffd7fef7d424e626"
	I1008 22:55:42.269301  183746 cri.go:89] found id: "aff39630382e0b657df55be14c63dfb5df04e731f6be4ae06c64640cbeb9f074"
	I1008 22:55:42.269327  183746 cri.go:89] found id: "e0004b069fee46142e9b07ac07faf8907f947de19c729c522213256f72792263"
	I1008 22:55:42.269331  183746 cri.go:89] found id: "0434b78e1b9c7b8bd208c0bd06784b6ae445fc7d2cf410fea035aea751050584"
	I1008 22:55:42.269349  183746 cri.go:89] found id: "c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f"
	I1008 22:55:42.269369  183746 cri.go:89] found id: "a87a33253a6361e87d6c423aff7d47025b3d27557d3d58981f75a36ab84eb3a8"
	I1008 22:55:42.269378  183746 cri.go:89] found id: ""
	I1008 22:55:42.269454  183746 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 22:55:42.282756  183746 retry.go:31] will retry after 228.423718ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:55:42Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:55:42.512304  183746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:55:42.526001  183746 pause.go:52] kubelet running: false
	I1008 22:55:42.526069  183746 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 22:55:42.724310  183746 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 22:55:42.724404  183746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 22:55:42.809574  183746 cri.go:89] found id: "3edfc0b30693211c28865d5219a9146586f475e536683705e62e7fee3cbd1d18"
	I1008 22:55:42.809600  183746 cri.go:89] found id: "d72eb628fb497376f9eefcba7d2f6f36dfe625924dc6e8dd4130842c7d32eee3"
	I1008 22:55:42.809606  183746 cri.go:89] found id: "dd58f37f74850810388d93ef413e3efb1a36fce33e2dc09297330f27a8cbf5c1"
	I1008 22:55:42.809610  183746 cri.go:89] found id: "0e580a0fb08ba90a90f58f3c01972f80ab2064c7b7f180e3447dce96336f16c7"
	I1008 22:55:42.809614  183746 cri.go:89] found id: "1089bc6ec608e4f6ff237f1aa25f35c60b338495b151b22c2b52a17146b6be9c"
	I1008 22:55:42.809618  183746 cri.go:89] found id: "31d5d12b3335847a7a1c8dd5ff7e9ed344177e872405a18bffd7fef7d424e626"
	I1008 22:55:42.809621  183746 cri.go:89] found id: "aff39630382e0b657df55be14c63dfb5df04e731f6be4ae06c64640cbeb9f074"
	I1008 22:55:42.809624  183746 cri.go:89] found id: "e0004b069fee46142e9b07ac07faf8907f947de19c729c522213256f72792263"
	I1008 22:55:42.809683  183746 cri.go:89] found id: "0434b78e1b9c7b8bd208c0bd06784b6ae445fc7d2cf410fea035aea751050584"
	I1008 22:55:42.809698  183746 cri.go:89] found id: "c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f"
	I1008 22:55:42.809707  183746 cri.go:89] found id: "a87a33253a6361e87d6c423aff7d47025b3d27557d3d58981f75a36ab84eb3a8"
	I1008 22:55:42.809711  183746 cri.go:89] found id: ""
	I1008 22:55:42.809762  183746 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 22:55:42.825013  183746 out.go:203] 
	W1008 22:55:42.828014  183746 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:55:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:55:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 22:55:42.828039  183746 out.go:285] * 
	* 
	W1008 22:55:42.833411  183746 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 22:55:42.839153  183746 out.go:203] 

                                                
                                                
** /stderr **
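The stderr above captures the failure mode: pause first enumerates the kube-system containers through crictl, then runs `sudo runc list -f json` on the node, and that call exits 1 with "open /run/runc: no such file or directory", so the pause aborts with GUEST_PAUSE after its retries. A minimal way to re-run the same checks by hand, assuming the old-k8s-version-110407 profile from this run is still up (the commands simply mirror the ones logged above):

	out/minikube-linux-arm64 ssh -p old-k8s-version-110407 "sudo ls /run/runc"          # state directory the runc call expects
	out/minikube-linux-arm64 ssh -p old-k8s-version-110407 "sudo runc list -f json"     # the exact call that fails above
	out/minikube-linux-arm64 ssh -p old-k8s-version-110407 "sudo crictl ps -a --quiet"  # CRI-O's own view of the containers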
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-110407 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-110407
helpers_test.go:243: (dbg) docker inspect old-k8s-version-110407:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04",
	        "Created": "2025-10-08T22:53:24.5168981Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 181556,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:54:43.568309319Z",
	            "FinishedAt": "2025-10-08T22:54:42.7498491Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/hostname",
	        "HostsPath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/hosts",
	        "LogPath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04-json.log",
	        "Name": "/old-k8s-version-110407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04",
	                "LowerDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110407",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110407",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110407",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0fc68605941aba809497f92c326960d331f79f515a0af8e0a5e026f9c621d85d",
	            "SandboxKey": "/var/run/docker/netns/0fc68605941a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:17:01:9c:e8:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed0b9760a08ed8f2576688b000be4aceb5b3090420383440e59b46e430cff699",
	                    "EndpointID": "9044269747b84de3c6e2c45acbfd893247bbdefaab052d42f9f30746cf8157bb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-110407",
	                        "164acd06879a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
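For spot checks it is usually enough to pull individual fields from the same inspect data with a Go template instead of the full JSON dump; these are variants of the templates the harness itself runs later in this log, applied to the container above:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-110407
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-110407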
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110407 -n old-k8s-version-110407
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110407 -n old-k8s-version-110407: exit status 2 (364.065134ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
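The host container reports Running, while the non-zero exit from status reflects that not every component is healthy after the failed pause (the helper explicitly treats this as possibly OK). A sketch of a wider status query for the same profile, assuming the standard status fields (Kubelet, APIServer, Kubeconfig) alongside Host:

	out/minikube-linux-arm64 status -p old-k8s-version-110407 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'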
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-110407 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-110407 logs -n 25: (1.336423637s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-840929 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo containerd config dump                                                                                                                                                                                                  │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo crio config                                                                                                                                                                                                             │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ delete  │ -p cilium-840929                                                                                                                                                                                                                              │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:45 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:46 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ delete  │ -p cert-expiration-292528                                                                                                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ start   │ -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │                     │
	│ delete  │ -p force-systemd-env-092546                                                                                                                                                                                                                   │ force-systemd-env-092546  │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:52 UTC │
	│ start   │ -p cert-options-378019 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ cert-options-378019 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ -p cert-options-378019 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ delete  │ -p cert-options-378019                                                                                                                                                                                                                        │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │                     │
	│ stop    │ -p old-k8s-version-110407 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-110407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:55 UTC │
	│ image   │ old-k8s-version-110407 image list --format=json                                                                                                                                                                                               │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ pause   │ -p old-k8s-version-110407 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:54:43
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:54:43.291880  181429 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:54:43.292097  181429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:54:43.292111  181429 out.go:374] Setting ErrFile to fd 2...
	I1008 22:54:43.292116  181429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:54:43.292421  181429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:54:43.292882  181429 out.go:368] Setting JSON to false
	I1008 22:54:43.293902  181429 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5834,"bootTime":1759958250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:54:43.293976  181429 start.go:141] virtualization:  
	I1008 22:54:43.297093  181429 out.go:179] * [old-k8s-version-110407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:54:43.300973  181429 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:54:43.301029  181429 notify.go:220] Checking for updates...
	I1008 22:54:43.307108  181429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:54:43.310222  181429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:54:43.313251  181429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:54:43.316294  181429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:54:43.319354  181429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:54:43.322801  181429 config.go:182] Loaded profile config "old-k8s-version-110407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:54:43.326296  181429 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1008 22:54:43.329144  181429 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:54:43.359671  181429 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:54:43.359865  181429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:54:43.415702  181429 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:54:43.406774105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:54:43.415821  181429 docker.go:318] overlay module found
	I1008 22:54:43.418872  181429 out.go:179] * Using the docker driver based on existing profile
	I1008 22:54:43.421761  181429 start.go:305] selected driver: docker
	I1008 22:54:43.421787  181429 start.go:925] validating driver "docker" against &{Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:54:43.421898  181429 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:54:43.422644  181429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:54:43.478313  181429 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:54:43.46945783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:54:43.478680  181429 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:54:43.478725  181429 cni.go:84] Creating CNI manager for ""
	I1008 22:54:43.478785  181429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:54:43.478827  181429 start.go:349] cluster config:
	{Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:54:43.483826  181429 out.go:179] * Starting "old-k8s-version-110407" primary control-plane node in "old-k8s-version-110407" cluster
	I1008 22:54:43.486603  181429 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:54:43.489474  181429 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:54:43.492237  181429 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:54:43.492297  181429 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1008 22:54:43.492312  181429 cache.go:58] Caching tarball of preloaded images
	I1008 22:54:43.492399  181429 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:54:43.492415  181429 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1008 22:54:43.492576  181429 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/config.json ...
	I1008 22:54:43.492813  181429 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:54:43.512640  181429 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:54:43.512666  181429 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:54:43.512690  181429 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:54:43.512714  181429 start.go:360] acquireMachinesLock for old-k8s-version-110407: {Name:mkbaacf9b00bd8ee87fd567c565e6e2b19f705c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:54:43.512781  181429 start.go:364] duration metric: took 42.61µs to acquireMachinesLock for "old-k8s-version-110407"
	I1008 22:54:43.512806  181429 start.go:96] Skipping create...Using existing machine configuration
	I1008 22:54:43.512822  181429 fix.go:54] fixHost starting: 
	I1008 22:54:43.513086  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:43.529814  181429 fix.go:112] recreateIfNeeded on old-k8s-version-110407: state=Stopped err=<nil>
	W1008 22:54:43.529848  181429 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 22:54:43.533101  181429 out.go:252] * Restarting existing docker container for "old-k8s-version-110407" ...
	I1008 22:54:43.533185  181429 cli_runner.go:164] Run: docker start old-k8s-version-110407
	I1008 22:54:43.790501  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:43.810606  181429 kic.go:430] container "old-k8s-version-110407" state is running.
	I1008 22:54:43.811005  181429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110407
	I1008 22:54:43.840696  181429 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/config.json ...
	I1008 22:54:43.840927  181429 machine.go:93] provisionDockerMachine start ...
	I1008 22:54:43.840990  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:43.866024  181429 main.go:141] libmachine: Using SSH client type: native
	I1008 22:54:43.866348  181429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33056 <nil> <nil>}
	I1008 22:54:43.866357  181429 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:54:43.867419  181429 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 22:54:47.017448  181429 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-110407
	
	I1008 22:54:47.017473  181429 ubuntu.go:182] provisioning hostname "old-k8s-version-110407"
	I1008 22:54:47.017545  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:47.037432  181429 main.go:141] libmachine: Using SSH client type: native
	I1008 22:54:47.037785  181429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33056 <nil> <nil>}
	I1008 22:54:47.037806  181429 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-110407 && echo "old-k8s-version-110407" | sudo tee /etc/hostname
	I1008 22:54:47.190909  181429 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-110407
	
	I1008 22:54:47.190998  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:47.209895  181429 main.go:141] libmachine: Using SSH client type: native
	I1008 22:54:47.210198  181429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33056 <nil> <nil>}
	I1008 22:54:47.210221  181429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-110407' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-110407/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-110407' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:54:47.353993  181429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:54:47.354018  181429 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:54:47.354057  181429 ubuntu.go:190] setting up certificates
	I1008 22:54:47.354068  181429 provision.go:84] configureAuth start
	I1008 22:54:47.354129  181429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110407
	I1008 22:54:47.371602  181429 provision.go:143] copyHostCerts
	I1008 22:54:47.371666  181429 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:54:47.371690  181429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:54:47.371767  181429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:54:47.371871  181429 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:54:47.371883  181429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:54:47.371911  181429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:54:47.371971  181429 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:54:47.371980  181429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:54:47.372006  181429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:54:47.372057  181429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-110407 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-110407]
	I1008 22:54:47.685626  181429 provision.go:177] copyRemoteCerts
	I1008 22:54:47.685704  181429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:54:47.685761  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:47.702881  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:47.806357  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:54:47.823993  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 22:54:47.841802  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 22:54:47.860203  181429 provision.go:87] duration metric: took 506.107685ms to configureAuth
	I1008 22:54:47.860245  181429 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:54:47.860433  181429 config.go:182] Loaded profile config "old-k8s-version-110407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:54:47.860542  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:47.878060  181429 main.go:141] libmachine: Using SSH client type: native
	I1008 22:54:47.878368  181429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33056 <nil> <nil>}
	I1008 22:54:47.878390  181429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:54:48.187621  181429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:54:48.187641  181429 machine.go:96] duration metric: took 4.346704471s to provisionDockerMachine
	I1008 22:54:48.187651  181429 start.go:293] postStartSetup for "old-k8s-version-110407" (driver="docker")
	I1008 22:54:48.187663  181429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:54:48.187731  181429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:54:48.187770  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:48.207873  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:48.313353  181429 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:54:48.317724  181429 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:54:48.317752  181429 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:54:48.317763  181429 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:54:48.317815  181429 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:54:48.317908  181429 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:54:48.318015  181429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:54:48.325618  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:54:48.343588  181429 start.go:296] duration metric: took 155.922078ms for postStartSetup
	I1008 22:54:48.343710  181429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:54:48.343776  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:48.360613  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:48.458877  181429 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:54:48.463750  181429 fix.go:56] duration metric: took 4.950927406s for fixHost
	I1008 22:54:48.463776  181429 start.go:83] releasing machines lock for "old-k8s-version-110407", held for 4.950980634s
	I1008 22:54:48.463844  181429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110407
	I1008 22:54:48.480786  181429 ssh_runner.go:195] Run: cat /version.json
	I1008 22:54:48.480845  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:48.481117  181429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:54:48.481174  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:48.499247  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:48.504671  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:48.690845  181429 ssh_runner.go:195] Run: systemctl --version
	I1008 22:54:48.697264  181429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:54:48.731831  181429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:54:48.736583  181429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:54:48.736658  181429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:54:48.745090  181429 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 22:54:48.745115  181429 start.go:495] detecting cgroup driver to use...
	I1008 22:54:48.745151  181429 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:54:48.745199  181429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:54:48.761500  181429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:54:48.774900  181429 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:54:48.775013  181429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:54:48.791623  181429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:54:48.805286  181429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:54:48.921505  181429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:54:49.042203  181429 docker.go:234] disabling docker service ...
	I1008 22:54:49.042297  181429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:54:49.057668  181429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:54:49.071356  181429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:54:49.192970  181429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:54:49.302928  181429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
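	Illustrative only, not part of the captured run: the stop/disable/mask commands above leave the docker and cri-docker units stopped and masked; a manual spot-check on the node, using the same unit names, could look like this:

	# hypothetical spot-check; unit names are taken from the commands logged above
	sudo systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket || true   # masked units print "masked"
	sudo systemctl is-active docker || echo "docker is not running"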
	I1008 22:54:49.315944  181429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:54:49.330161  181429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1008 22:54:49.330225  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.339166  181429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:54:49.339315  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.349539  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.359245  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.368387  181429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:54:49.376470  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.385285  181429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.393510  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.404220  181429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:54:49.411975  181429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:54:49.420045  181429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:54:49.545017  181429 ssh_runner.go:195] Run: sudo systemctl restart crio
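	Illustrative only, not part of the captured run: every sed edit above targets the /etc/crio/crio.conf.d/02-crio.conf drop-in (pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl). After the restart, a quick way to confirm those keys landed would be:

	# hypothetical verification; the keys correspond to the sed commands logged above
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo cat /etc/crictl.yaml    # should show the runtime-endpoint written by the tee command above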
	I1008 22:54:49.675649  181429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:54:49.675735  181429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:54:49.679590  181429 start.go:563] Will wait 60s for crictl version
	I1008 22:54:49.679662  181429 ssh_runner.go:195] Run: which crictl
	I1008 22:54:49.683416  181429 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:54:49.712057  181429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:54:49.712142  181429 ssh_runner.go:195] Run: crio --version
	I1008 22:54:49.739173  181429 ssh_runner.go:195] Run: crio --version
	I1008 22:54:49.771087  181429 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1008 22:54:49.774359  181429 cli_runner.go:164] Run: docker network inspect old-k8s-version-110407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:54:49.791222  181429 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:54:49.795000  181429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:54:49.804755  181429 kubeadm.go:883] updating cluster {Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:54:49.804862  181429 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:54:49.804914  181429 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:54:49.836826  181429 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:54:49.836852  181429 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:54:49.836905  181429 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:54:49.863093  181429 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:54:49.863119  181429 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:54:49.863128  181429 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1008 22:54:49.863255  181429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-110407 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:54:49.863340  181429 ssh_runner.go:195] Run: crio config
	I1008 22:54:49.929581  181429 cni.go:84] Creating CNI manager for ""
	I1008 22:54:49.929616  181429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:54:49.929668  181429 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:54:49.929696  181429 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-110407 NodeName:old-k8s-version-110407 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:54:49.929851  181429 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-110407"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:54:49.929931  181429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1008 22:54:49.937586  181429 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:54:49.937703  181429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:54:49.945321  181429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1008 22:54:49.958697  181429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:54:49.972202  181429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1008 22:54:49.984632  181429 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:54:49.988291  181429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:54:49.998431  181429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:54:50.126834  181429 ssh_runner.go:195] Run: sudo systemctl start kubelet
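	Illustrative only, not part of the captured run: the 10-kubeadm.conf drop-in written above overrides the kubelet ExecStart with the flags shown in the generated unit (hostname-override, node-ip, cgroups-per-qos=false, and so on). On the node, the effective unit including that drop-in could be inspected with:

	# hypothetical inspection commands; the paths are the ones created above
	sudo systemctl cat kubelet            # prints kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl status kubelet --no-pager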
	I1008 22:54:50.146906  181429 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407 for IP: 192.168.85.2
	I1008 22:54:50.146971  181429 certs.go:195] generating shared ca certs ...
	I1008 22:54:50.147002  181429 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:50.147162  181429 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:54:50.147240  181429 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:54:50.147265  181429 certs.go:257] generating profile certs ...
	I1008 22:54:50.147378  181429 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.key
	I1008 22:54:50.147475  181429 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key.5d0843e3
	I1008 22:54:50.147552  181429 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.key
	I1008 22:54:50.147697  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:54:50.147758  181429 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:54:50.147785  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:54:50.147843  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:54:50.147889  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:54:50.147935  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:54:50.148004  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:54:50.148703  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:54:50.170517  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:54:50.190180  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:54:50.208269  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:54:50.229771  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 22:54:50.258205  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:54:50.283783  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:54:50.311693  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 22:54:50.341249  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:54:50.384591  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:54:50.404566  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:54:50.427598  181429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:54:50.447699  181429 ssh_runner.go:195] Run: openssl version
	I1008 22:54:50.453960  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:54:50.462887  181429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:54:50.466747  181429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:54:50.466814  181429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:54:50.508076  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:54:50.516156  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:54:50.524736  181429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:54:50.528467  181429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:54:50.528537  181429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:54:50.572271  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:54:50.580521  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:54:50.589081  181429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:54:50.592958  181429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:54:50.593023  181429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:54:50.634526  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
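	Illustrative only, not part of the captured run: the *.0 links created above follow OpenSSL's hashed-name lookup convention, where the link name is the subject hash of the certificate. For example, the b5213941.0 link for minikubeCA.pem can be reproduced by asking openssl for the hash:

	# hypothetical check; the subject hash should match the link name used above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected output: b5213941
	ls -l /etc/ssl/certs/b5213941.0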
	I1008 22:54:50.642696  181429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:54:50.646669  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 22:54:50.694573  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 22:54:50.735927  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 22:54:50.784913  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 22:54:50.830844  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 22:54:50.890508  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
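	Illustrative only, not part of the captured run: each -checkend 86400 check above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire in that window, so regeneration is skipped. A standalone equivalent:

	# hypothetical example using one of the certificates checked above
	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "etcd server certificate is valid for at least another 24 hours"
	fi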
	I1008 22:54:50.954184  181429 kubeadm.go:400] StartCluster: {Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:54:50.954329  181429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:54:50.954462  181429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:54:51.040876  181429 cri.go:89] found id: "31d5d12b3335847a7a1c8dd5ff7e9ed344177e872405a18bffd7fef7d424e626"
	I1008 22:54:51.040949  181429 cri.go:89] found id: "aff39630382e0b657df55be14c63dfb5df04e731f6be4ae06c64640cbeb9f074"
	I1008 22:54:51.040968  181429 cri.go:89] found id: "e0004b069fee46142e9b07ac07faf8907f947de19c729c522213256f72792263"
	I1008 22:54:51.040987  181429 cri.go:89] found id: "0434b78e1b9c7b8bd208c0bd06784b6ae445fc7d2cf410fea035aea751050584"
	I1008 22:54:51.041026  181429 cri.go:89] found id: ""
	I1008 22:54:51.041143  181429 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 22:54:51.059460  181429 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:54:51Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:54:51.059609  181429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:54:51.073667  181429 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 22:54:51.073741  181429 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 22:54:51.073840  181429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 22:54:51.087025  181429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 22:54:51.087480  181429 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-110407" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:54:51.087589  181429 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-110407" cluster setting kubeconfig missing "old-k8s-version-110407" context setting]
	I1008 22:54:51.087882  181429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:51.089395  181429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 22:54:51.102317  181429 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 22:54:51.102354  181429 kubeadm.go:601] duration metric: took 28.592424ms to restartPrimaryControlPlane
	I1008 22:54:51.102364  181429 kubeadm.go:402] duration metric: took 148.191748ms to StartCluster
	I1008 22:54:51.102383  181429 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:51.102447  181429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:54:51.103131  181429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:51.103350  181429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:54:51.103657  181429 config.go:182] Loaded profile config "old-k8s-version-110407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:54:51.103707  181429 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:54:51.103775  181429 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-110407"
	I1008 22:54:51.103794  181429 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-110407"
	W1008 22:54:51.103890  181429 addons.go:247] addon storage-provisioner should already be in state true
	I1008 22:54:51.103915  181429 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:54:51.104665  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:51.103818  181429 addons.go:69] Setting dashboard=true in profile "old-k8s-version-110407"
	I1008 22:54:51.104950  181429 addons.go:238] Setting addon dashboard=true in "old-k8s-version-110407"
	W1008 22:54:51.104963  181429 addons.go:247] addon dashboard should already be in state true
	I1008 22:54:51.104988  181429 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:54:51.103828  181429 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-110407"
	I1008 22:54:51.105335  181429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-110407"
	I1008 22:54:51.105568  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:51.106537  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:51.109097  181429 out.go:179] * Verifying Kubernetes components...
	I1008 22:54:51.116189  181429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:54:51.157338  181429 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:54:51.161945  181429 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:54:51.161974  181429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:54:51.162046  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:51.183504  181429 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-110407"
	W1008 22:54:51.183528  181429 addons.go:247] addon default-storageclass should already be in state true
	I1008 22:54:51.183552  181429 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:54:51.183966  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:51.207573  181429 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 22:54:51.210527  181429 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 22:54:51.215534  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 22:54:51.215567  181429 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 22:54:51.215643  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:51.245869  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:51.255002  181429 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:54:51.255025  181429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:54:51.255091  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:51.261788  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:51.295415  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:51.468204  181429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:54:51.522724  181429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:54:51.528413  181429 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-110407" to be "Ready" ...
	I1008 22:54:51.530843  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 22:54:51.530924  181429 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 22:54:51.531335  181429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:54:51.587613  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 22:54:51.587679  181429 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 22:54:51.660498  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 22:54:51.660572  181429 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 22:54:51.756344  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 22:54:51.756407  181429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 22:54:51.817220  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 22:54:51.817290  181429 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 22:54:51.842287  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 22:54:51.842350  181429 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 22:54:51.864248  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 22:54:51.864318  181429 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 22:54:51.884877  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 22:54:51.884954  181429 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 22:54:51.908058  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:54:51.908139  181429 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 22:54:51.929228  181429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:54:55.241584  181429 node_ready.go:49] node "old-k8s-version-110407" is "Ready"
	I1008 22:54:55.241659  181429 node_ready.go:38] duration metric: took 3.713205436s for node "old-k8s-version-110407" to be "Ready" ...
	I1008 22:54:55.241674  181429 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:54:55.241798  181429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:54:56.815942  181429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.284557108s)
	I1008 22:54:56.816225  181429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.293468087s)
	I1008 22:54:57.290966  181429 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.049133351s)
	I1008 22:54:57.290999  181429 api_server.go:72] duration metric: took 6.187615746s to wait for apiserver process to appear ...
	I1008 22:54:57.291007  181429 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:54:57.291034  181429 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:54:57.291552  181429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.362231671s)
	I1008 22:54:57.294710  181429 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-110407 addons enable metrics-server
	
	I1008 22:54:57.297714  181429 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1008 22:54:57.301404  181429 addons.go:514] duration metric: took 6.197695068s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
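	Illustrative only, not part of the captured run: after the enable step above, the per-profile addon states could be listed with the same invocation style shown in the metrics-server hint:

	# hypothetical follow-up, not executed by the test
	minikube -p old-k8s-version-110407 addons list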
	I1008 22:54:57.302259  181429 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 22:54:57.303736  181429 api_server.go:141] control plane version: v1.28.0
	I1008 22:54:57.303763  181429 api_server.go:131] duration metric: took 12.749998ms to wait for apiserver health ...
	I1008 22:54:57.303772  181429 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:54:57.308500  181429 system_pods.go:59] 8 kube-system pods found
	I1008 22:54:57.308543  181429 system_pods.go:61] "coredns-5dd5756b68-p9wsf" [94a25734-c268-4a26-8995-467082f156ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:54:57.308552  181429 system_pods.go:61] "etcd-old-k8s-version-110407" [9341d2d7-8457-4042-953a-042454abf107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:54:57.308558  181429 system_pods.go:61] "kindnet-dzbkd" [293adcb3-a304-42a9-8533-ef23cf040ea6] Running
	I1008 22:54:57.308565  181429 system_pods.go:61] "kube-apiserver-old-k8s-version-110407" [ecdf237a-8269-4ebc-a83b-0f08d6f8157f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:54:57.308572  181429 system_pods.go:61] "kube-controller-manager-old-k8s-version-110407" [8f6a76d5-b9f0-494e-91b8-f3800acb243c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:54:57.308577  181429 system_pods.go:61] "kube-proxy-gsbl4" [cccf2800-b3c8-4684-bc54-d88b59e04bb6] Running
	I1008 22:54:57.308589  181429 system_pods.go:61] "kube-scheduler-old-k8s-version-110407" [695e473d-ed17-4a6f-ada7-b54cde1e5ddc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:54:57.308602  181429 system_pods.go:61] "storage-provisioner" [6105db1d-9197-46c6-8ae0-49fe2291d679] Running
	I1008 22:54:57.308608  181429 system_pods.go:74] duration metric: took 4.83158ms to wait for pod list to return data ...
	I1008 22:54:57.308621  181429 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:54:57.314845  181429 default_sa.go:45] found service account: "default"
	I1008 22:54:57.314875  181429 default_sa.go:55] duration metric: took 6.247154ms for default service account to be created ...
	I1008 22:54:57.314886  181429 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:54:57.319333  181429 system_pods.go:86] 8 kube-system pods found
	I1008 22:54:57.319369  181429 system_pods.go:89] "coredns-5dd5756b68-p9wsf" [94a25734-c268-4a26-8995-467082f156ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:54:57.319380  181429 system_pods.go:89] "etcd-old-k8s-version-110407" [9341d2d7-8457-4042-953a-042454abf107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:54:57.319387  181429 system_pods.go:89] "kindnet-dzbkd" [293adcb3-a304-42a9-8533-ef23cf040ea6] Running
	I1008 22:54:57.319395  181429 system_pods.go:89] "kube-apiserver-old-k8s-version-110407" [ecdf237a-8269-4ebc-a83b-0f08d6f8157f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:54:57.319405  181429 system_pods.go:89] "kube-controller-manager-old-k8s-version-110407" [8f6a76d5-b9f0-494e-91b8-f3800acb243c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:54:57.319419  181429 system_pods.go:89] "kube-proxy-gsbl4" [cccf2800-b3c8-4684-bc54-d88b59e04bb6] Running
	I1008 22:54:57.319427  181429 system_pods.go:89] "kube-scheduler-old-k8s-version-110407" [695e473d-ed17-4a6f-ada7-b54cde1e5ddc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:54:57.319437  181429 system_pods.go:89] "storage-provisioner" [6105db1d-9197-46c6-8ae0-49fe2291d679] Running
	I1008 22:54:57.319444  181429 system_pods.go:126] duration metric: took 4.552841ms to wait for k8s-apps to be running ...
	I1008 22:54:57.319454  181429 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:54:57.319508  181429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:54:57.334425  181429 system_svc.go:56] duration metric: took 14.961938ms WaitForService to wait for kubelet
	I1008 22:54:57.334456  181429 kubeadm.go:586] duration metric: took 6.231071448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:54:57.334474  181429 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:54:57.337035  181429 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:54:57.337070  181429 node_conditions.go:123] node cpu capacity is 2
	I1008 22:54:57.337082  181429 node_conditions.go:105] duration metric: took 2.60291ms to run NodePressure ...
	I1008 22:54:57.337095  181429 start.go:241] waiting for startup goroutines ...
	I1008 22:54:57.337103  181429 start.go:246] waiting for cluster config update ...
	I1008 22:54:57.337114  181429 start.go:255] writing updated cluster config ...
	I1008 22:54:57.337398  181429 ssh_runner.go:195] Run: rm -f paused
	I1008 22:54:57.341370  181429 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:54:57.346941  181429 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-p9wsf" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 22:54:59.353849  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:01.853023  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:04.352864  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:06.852953  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:08.854456  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:11.352636  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:13.354075  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:15.854455  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:18.353001  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:20.353334  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:22.855536  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:25.352938  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	I1008 22:55:26.853605  181429 pod_ready.go:94] pod "coredns-5dd5756b68-p9wsf" is "Ready"
	I1008 22:55:26.853671  181429 pod_ready.go:86] duration metric: took 29.506703759s for pod "coredns-5dd5756b68-p9wsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.856697  181429 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.861541  181429 pod_ready.go:94] pod "etcd-old-k8s-version-110407" is "Ready"
	I1008 22:55:26.861571  181429 pod_ready.go:86] duration metric: took 4.848401ms for pod "etcd-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.864941  181429 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.870530  181429 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-110407" is "Ready"
	I1008 22:55:26.870560  181429 pod_ready.go:86] duration metric: took 5.590924ms for pod "kube-apiserver-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.873687  181429 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:27.050298  181429 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-110407" is "Ready"
	I1008 22:55:27.050330  181429 pod_ready.go:86] duration metric: took 176.61453ms for pod "kube-controller-manager-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:27.250955  181429 pod_ready.go:83] waiting for pod "kube-proxy-gsbl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:27.650544  181429 pod_ready.go:94] pod "kube-proxy-gsbl4" is "Ready"
	I1008 22:55:27.650572  181429 pod_ready.go:86] duration metric: took 399.591208ms for pod "kube-proxy-gsbl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:27.852083  181429 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:28.250749  181429 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-110407" is "Ready"
	I1008 22:55:28.250778  181429 pod_ready.go:86] duration metric: took 398.65724ms for pod "kube-scheduler-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:28.250791  181429 pod_ready.go:40] duration metric: took 30.909384228s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:55:28.309380  181429 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1008 22:55:28.312305  181429 out.go:203] 
	W1008 22:55:28.315373  181429 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1008 22:55:28.318253  181429 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1008 22:55:28.321132  181429 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-110407" cluster and "default" namespace by default
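	Illustrative only, not part of the captured run: the extra 4-minute wait above (pod_ready.go) polls the labelled kube-system pods until each reports Ready; roughly the same condition can be expressed directly with kubectl, assuming the context name matches the profile as configured in the final message:

	# hypothetical equivalents for two of the labels listed in the wait message above
	kubectl --context old-k8s-version-110407 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	kubectl --context old-k8s-version-110407 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m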
	
	
	==> CRI-O <==
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.483481147Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58d419bc-2db2-4d97-b8e2-337ad5633f44 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.486614681Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a5eca641-b41a-4e76-bb41-c17cb617aa43 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.491333833Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6/dashboard-metrics-scraper" id=c24cd51f-ee84-4784-a0d1-cea5f736a40e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.491604844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.499193683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.499880624Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.531633889Z" level=info msg="Created container c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6/dashboard-metrics-scraper" id=c24cd51f-ee84-4784-a0d1-cea5f736a40e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.534069585Z" level=info msg="Starting container: c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f" id=87160fbb-1801-4c32-b31c-fd26d04c3278 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.535958887Z" level=info msg="Started container" PID=1639 containerID=c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6/dashboard-metrics-scraper id=87160fbb-1801-4c32-b31c-fd26d04c3278 name=/runtime.v1.RuntimeService/StartContainer sandboxID=77e405280b1e53f0e6f90e2cd7ac1d29e6ab8d2bc24779386f355a4a9567aa3f
	Oct 08 22:55:28 old-k8s-version-110407 conmon[1636]: conmon c246c8270cd890985c9f <ninfo>: container 1639 exited with status 1
	Oct 08 22:55:29 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:29.531931248Z" level=info msg="Removing container: 68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881" id=ad948224-dd7a-431d-8252-6a7e0f3e874b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:55:29 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:29.541966647Z" level=info msg="Error loading conmon cgroup of container 68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881: cgroup deleted" id=ad948224-dd7a-431d-8252-6a7e0f3e874b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:55:29 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:29.547081891Z" level=info msg="Removed container 68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6/dashboard-metrics-scraper" id=ad948224-dd7a-431d-8252-6a7e0f3e874b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.207945809Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.212229781Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.212411642Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.212450871Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.215747837Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.215780814Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.215803961Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.21900815Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.219044253Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.219071051Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.222612343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.222654477Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c246c8270cd89       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   77e405280b1e5       dashboard-metrics-scraper-5f989dc9cf-9nlr6       kubernetes-dashboard
	3edfc0b306932       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   db92f45385f10       storage-provisioner                              kube-system
	a87a33253a636       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   a486ba6d80c07       kubernetes-dashboard-8694d4445c-wfmhw            kubernetes-dashboard
	9af0417f322b6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   31cdd7240bbbe       busybox                                          default
	d72eb628fb497       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           48 seconds ago      Running             coredns                     1                   02e73c19aab54       coredns-5dd5756b68-p9wsf                         kube-system
	dd58f37f74850       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   b9bc32a8584d8       kindnet-dzbkd                                    kube-system
	0e580a0fb08ba       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           48 seconds ago      Running             kube-proxy                  1                   2987aeced0dc5       kube-proxy-gsbl4                                 kube-system
	1089bc6ec608e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   db92f45385f10       storage-provisioner                              kube-system
	31d5d12b33358       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           53 seconds ago      Running             kube-apiserver              1                   7ba2aca9cbd2e       kube-apiserver-old-k8s-version-110407            kube-system
	aff39630382e0       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           53 seconds ago      Running             kube-controller-manager     1                   78d95e6215fc3       kube-controller-manager-old-k8s-version-110407   kube-system
	e0004b069fee4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           53 seconds ago      Running             etcd                        1                   837d7d22aa22c       etcd-old-k8s-version-110407                      kube-system
	0434b78e1b9c7       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           53 seconds ago      Running             kube-scheduler              1                   64eeda4998dfa       kube-scheduler-old-k8s-version-110407            kube-system
	
	
	==> coredns [d72eb628fb497376f9eefcba7d2f6f36dfe625924dc6e8dd4130842c7d32eee3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55707 - 61582 "HINFO IN 6697083693980746306.7114089042911625345. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.075436956s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-110407
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-110407
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=old-k8s-version-110407
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_53_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:53:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-110407
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:55:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:55:26 +0000   Wed, 08 Oct 2025 22:53:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:55:26 +0000   Wed, 08 Oct 2025 22:53:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:55:26 +0000   Wed, 08 Oct 2025 22:53:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:55:26 +0000   Wed, 08 Oct 2025 22:54:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-110407
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2f6f93ebbc24529ad1c0a658632a5da
	  System UUID:                8dba2821-4735-44a2-98ca-98cb78fcdea2
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-p9wsf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     103s
	  kube-system                 etcd-old-k8s-version-110407                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-dzbkd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-old-k8s-version-110407             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-old-k8s-version-110407    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-gsbl4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-old-k8s-version-110407             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-9nlr6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-wfmhw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-110407 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node old-k8s-version-110407 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node old-k8s-version-110407 event: Registered Node old-k8s-version-110407 in Controller
	  Normal  NodeReady                89s                  kubelet          Node old-k8s-version-110407 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node old-k8s-version-110407 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                  node-controller  Node old-k8s-version-110407 event: Registered Node old-k8s-version-110407 in Controller
	
	
	==> dmesg <==
	[Oct 8 22:22] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:27] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e0004b069fee46142e9b07ac07faf8907f947de19c729c522213256f72792263] <==
	{"level":"info","ts":"2025-10-08T22:54:51.083546Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-08T22:54:51.083621Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-08T22:54:51.084056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-08T22:54:51.093265Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-08T22:54:51.093386Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T22:54:51.093424Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T22:54:51.095309Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-08T22:54:51.095526Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-08T22:54:51.095558Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-08T22:54:51.09563Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-08T22:54:51.095643Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-08T22:54:52.493662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-08T22:54:52.493714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-08T22:54:52.493749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-08T22:54:52.493764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-08T22:54:52.493771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-08T22:54:52.49378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-08T22:54:52.493788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-08T22:54:52.499733Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-110407 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-08T22:54:52.499784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-08T22:54:52.499972Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-08T22:54:52.500054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-08T22:54:52.501251Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-08T22:54:52.499803Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-08T22:54:52.505815Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:55:44 up  1:38,  0 user,  load average: 1.27, 1.34, 1.66
	Linux old-k8s-version-110407 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dd58f37f74850810388d93ef413e3efb1a36fce33e2dc09297330f27a8cbf5c1] <==
	I1008 22:54:55.916361       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:54:56.002436       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 22:54:56.002702       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:54:56.002752       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:54:56.003052       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:54:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:54:56.203830       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:54:56.203847       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:54:56.203855       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:54:56.204136       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 22:55:26.203954       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 22:55:26.203954       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 22:55:26.204065       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 22:55:26.204936       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1008 22:55:27.504093       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:55:27.504192       1 metrics.go:72] Registering metrics
	I1008 22:55:27.504291       1 controller.go:711] "Syncing nftables rules"
	I1008 22:55:36.207646       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:55:36.207684       1 main.go:301] handling current node
	
	
	==> kube-apiserver [31d5d12b3335847a7a1c8dd5ff7e9ed344177e872405a18bffd7fef7d424e626] <==
	I1008 22:54:55.300044       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1008 22:54:55.317777       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1008 22:54:55.318005       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1008 22:54:55.318023       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1008 22:54:55.319371       1 aggregator.go:166] initial CRD sync complete...
	I1008 22:54:55.319399       1 autoregister_controller.go:141] Starting autoregister controller
	I1008 22:54:55.319405       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1008 22:54:55.320503       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1008 22:54:55.322562       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:54:55.366479       1 shared_informer.go:318] Caches are synced for configmaps
	I1008 22:54:55.376935       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 22:54:55.390931       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1008 22:54:55.425560       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 22:54:55.431420       1 cache.go:39] Caches are synced for autoregister controller
	I1008 22:54:56.007772       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:54:57.090044       1 controller.go:624] quota admission added evaluator for: namespaces
	I1008 22:54:57.140978       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1008 22:54:57.168982       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 22:54:57.182058       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 22:54:57.191969       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1008 22:54:57.259563       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.76.243"}
	I1008 22:54:57.283386       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.119.174"}
	I1008 22:55:07.977184       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:55:08.026775       1 controller.go:624] quota admission added evaluator for: endpoints
	I1008 22:55:08.130464       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [aff39630382e0b657df55be14c63dfb5df04e731f6be4ae06c64640cbeb9f074] <==
	I1008 22:55:07.934231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.433µs"
	I1008 22:55:08.137529       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1008 22:55:08.143965       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1008 22:55:08.160966       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-wfmhw"
	I1008 22:55:08.160993       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-9nlr6"
	I1008 22:55:08.161908       1 shared_informer.go:318] Caches are synced for garbage collector
	I1008 22:55:08.165415       1 shared_informer.go:318] Caches are synced for garbage collector
	I1008 22:55:08.165503       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1008 22:55:08.169799       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.560823ms"
	I1008 22:55:08.181445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.623728ms"
	I1008 22:55:08.191801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.492714ms"
	I1008 22:55:08.191914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.407µs"
	I1008 22:55:08.199732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.23309ms"
	I1008 22:55:08.199824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.391µs"
	I1008 22:55:08.203528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.359µs"
	I1008 22:55:08.218735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.023µs"
	I1008 22:55:13.543806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.263866ms"
	I1008 22:55:13.543927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.15µs"
	I1008 22:55:17.510077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.139µs"
	I1008 22:55:18.522842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.989µs"
	I1008 22:55:19.522110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.367µs"
	I1008 22:55:26.577926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.581079ms"
	I1008 22:55:26.578183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.307µs"
	I1008 22:55:29.550062       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.857µs"
	I1008 22:55:38.498852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.284µs"
	
	
	==> kube-proxy [0e580a0fb08ba90a90f58f3c01972f80ab2064c7b7f180e3447dce96336f16c7] <==
	I1008 22:54:56.018867       1 server_others.go:69] "Using iptables proxy"
	I1008 22:54:56.056121       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1008 22:54:56.243032       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:54:56.254091       1 server_others.go:152] "Using iptables Proxier"
	I1008 22:54:56.254130       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1008 22:54:56.254139       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1008 22:54:56.254170       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1008 22:54:56.254397       1 server.go:846] "Version info" version="v1.28.0"
	I1008 22:54:56.254408       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:54:56.255610       1 config.go:188] "Starting service config controller"
	I1008 22:54:56.255622       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1008 22:54:56.255638       1 config.go:97] "Starting endpoint slice config controller"
	I1008 22:54:56.255642       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1008 22:54:56.255992       1 config.go:315] "Starting node config controller"
	I1008 22:54:56.255998       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1008 22:54:56.356041       1 shared_informer.go:318] Caches are synced for node config
	I1008 22:54:56.356082       1 shared_informer.go:318] Caches are synced for service config
	I1008 22:54:56.356113       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0434b78e1b9c7b8bd208c0bd06784b6ae445fc7d2cf410fea035aea751050584] <==
	I1008 22:54:53.718786       1 serving.go:348] Generated self-signed cert in-memory
	W1008 22:54:55.206068       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 22:54:55.206179       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 22:54:55.206212       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 22:54:55.206253       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 22:54:55.303843       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1008 22:54:55.303941       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:54:55.309717       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:54:55.309780       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 22:54:55.309976       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1008 22:54:55.310090       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1008 22:54:55.411334       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 22:55:08 old-k8s-version-110407 kubelet[774]: I1008 22:55:08.307247     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c8d79e86-94d5-4b6b-ba36-3245de9e0ae5-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-9nlr6\" (UID: \"c8d79e86-94d5-4b6b-ba36-3245de9e0ae5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6"
	Oct 08 22:55:08 old-k8s-version-110407 kubelet[774]: I1008 22:55:08.307282     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496rj\" (UniqueName: \"kubernetes.io/projected/c8d79e86-94d5-4b6b-ba36-3245de9e0ae5-kube-api-access-496rj\") pod \"dashboard-metrics-scraper-5f989dc9cf-9nlr6\" (UID: \"c8d79e86-94d5-4b6b-ba36-3245de9e0ae5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6"
	Oct 08 22:55:08 old-k8s-version-110407 kubelet[774]: W1008 22:55:08.526431     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/crio-a486ba6d80c078f99c70dd996e78f2fa97390883c0d087567fd3e03ec7eb6413 WatchSource:0}: Error finding container a486ba6d80c078f99c70dd996e78f2fa97390883c0d087567fd3e03ec7eb6413: Status 404 returned error can't find the container with id a486ba6d80c078f99c70dd996e78f2fa97390883c0d087567fd3e03ec7eb6413
	Oct 08 22:55:08 old-k8s-version-110407 kubelet[774]: W1008 22:55:08.532020     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/crio-77e405280b1e53f0e6f90e2cd7ac1d29e6ab8d2bc24779386f355a4a9567aa3f WatchSource:0}: Error finding container 77e405280b1e53f0e6f90e2cd7ac1d29e6ab8d2bc24779386f355a4a9567aa3f: Status 404 returned error can't find the container with id 77e405280b1e53f0e6f90e2cd7ac1d29e6ab8d2bc24779386f355a4a9567aa3f
	Oct 08 22:55:17 old-k8s-version-110407 kubelet[774]: I1008 22:55:17.492881     774 scope.go:117] "RemoveContainer" containerID="532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"
	Oct 08 22:55:17 old-k8s-version-110407 kubelet[774]: I1008 22:55:17.508418     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wfmhw" podStartSLOduration=5.04770703 podCreationTimestamp="2025-10-08 22:55:08 +0000 UTC" firstStartedPulling="2025-10-08 22:55:08.530941212 +0000 UTC m=+18.384849969" lastFinishedPulling="2025-10-08 22:55:12.991589415 +0000 UTC m=+22.845498205" observedRunningTime="2025-10-08 22:55:13.502163216 +0000 UTC m=+23.356072055" watchObservedRunningTime="2025-10-08 22:55:17.508355266 +0000 UTC m=+27.362264023"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: I1008 22:55:18.497700     774 scope.go:117] "RemoveContainer" containerID="532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: I1008 22:55:18.498257     774 scope.go:117] "RemoveContainer" containerID="68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: E1008 22:55:18.498613     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9nlr6_kubernetes-dashboard(c8d79e86-94d5-4b6b-ba36-3245de9e0ae5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6" podUID="c8d79e86-94d5-4b6b-ba36-3245de9e0ae5"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: I1008 22:55:18.517440     774 scope.go:117] "RemoveContainer" containerID="532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: E1008 22:55:18.518116     774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046\": container with ID starting with 532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046 not found: ID does not exist" containerID="532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: I1008 22:55:18.518349     774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"} err="failed to get container status \"532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046\": rpc error: code = NotFound desc = could not find container \"532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046\": container with ID starting with 532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046 not found: ID does not exist"
	Oct 08 22:55:19 old-k8s-version-110407 kubelet[774]: I1008 22:55:19.501239     774 scope.go:117] "RemoveContainer" containerID="68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881"
	Oct 08 22:55:19 old-k8s-version-110407 kubelet[774]: E1008 22:55:19.502083     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9nlr6_kubernetes-dashboard(c8d79e86-94d5-4b6b-ba36-3245de9e0ae5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6" podUID="c8d79e86-94d5-4b6b-ba36-3245de9e0ae5"
	Oct 08 22:55:26 old-k8s-version-110407 kubelet[774]: I1008 22:55:26.518371     774 scope.go:117] "RemoveContainer" containerID="1089bc6ec608e4f6ff237f1aa25f35c60b338495b151b22c2b52a17146b6be9c"
	Oct 08 22:55:28 old-k8s-version-110407 kubelet[774]: I1008 22:55:28.482471     774 scope.go:117] "RemoveContainer" containerID="68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881"
	Oct 08 22:55:29 old-k8s-version-110407 kubelet[774]: I1008 22:55:29.528640     774 scope.go:117] "RemoveContainer" containerID="68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881"
	Oct 08 22:55:29 old-k8s-version-110407 kubelet[774]: I1008 22:55:29.528925     774 scope.go:117] "RemoveContainer" containerID="c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f"
	Oct 08 22:55:29 old-k8s-version-110407 kubelet[774]: E1008 22:55:29.529237     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9nlr6_kubernetes-dashboard(c8d79e86-94d5-4b6b-ba36-3245de9e0ae5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6" podUID="c8d79e86-94d5-4b6b-ba36-3245de9e0ae5"
	Oct 08 22:55:38 old-k8s-version-110407 kubelet[774]: I1008 22:55:38.482564     774 scope.go:117] "RemoveContainer" containerID="c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f"
	Oct 08 22:55:38 old-k8s-version-110407 kubelet[774]: E1008 22:55:38.483357     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9nlr6_kubernetes-dashboard(c8d79e86-94d5-4b6b-ba36-3245de9e0ae5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6" podUID="c8d79e86-94d5-4b6b-ba36-3245de9e0ae5"
	Oct 08 22:55:41 old-k8s-version-110407 kubelet[774]: I1008 22:55:41.521099     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 08 22:55:41 old-k8s-version-110407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 22:55:41 old-k8s-version-110407 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 22:55:41 old-k8s-version-110407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a87a33253a6361e87d6c423aff7d47025b3d27557d3d58981f75a36ab84eb3a8] <==
	2025/10/08 22:55:13 Starting overwatch
	2025/10/08 22:55:13 Using namespace: kubernetes-dashboard
	2025/10/08 22:55:13 Using in-cluster config to connect to apiserver
	2025/10/08 22:55:13 Using secret token for csrf signing
	2025/10/08 22:55:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/08 22:55:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/08 22:55:13 Successful initial request to the apiserver, version: v1.28.0
	2025/10/08 22:55:13 Generating JWE encryption key
	2025/10/08 22:55:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/08 22:55:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/08 22:55:15 Initializing JWE encryption key from synchronized object
	2025/10/08 22:55:15 Creating in-cluster Sidecar client
	2025/10/08 22:55:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 22:55:15 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [1089bc6ec608e4f6ff237f1aa25f35c60b338495b151b22c2b52a17146b6be9c] <==
	I1008 22:54:55.926000       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 22:55:25.928401       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3edfc0b30693211c28865d5219a9146586f475e536683705e62e7fee3cbd1d18] <==
	I1008 22:55:26.575707       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 22:55:26.594695       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 22:55:26.594839       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 22:55:43.994967       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 22:55:43.995124       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-110407_b9ed06f4-aaf9-4de2-ac97-3f9149e8a08a!
	I1008 22:55:43.996044       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f2116a6-5967-4c8b-a3c3-8076bb9f79ff", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-110407_b9ed06f4-aaf9-4de2-ac97-3f9149e8a08a became leader
	I1008 22:55:44.095318       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-110407_b9ed06f4-aaf9-4de2-ac97-3f9149e8a08a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-110407 -n old-k8s-version-110407
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-110407 -n old-k8s-version-110407: exit status 2 (349.921766ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-110407 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
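The post-mortem trace above follows a fixed pattern: run `out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-110407`, treat exit status 2 as tolerable (a paused profile still prints "Running"), then list non-Running pods with kubectl. A minimal Go sketch of that flow is below for readers reproducing the check by hand; the helper name `checkPausedStatus` and the hard-coded profile are illustrative assumptions, not part of helpers_test.go.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// checkPausedStatus is a hypothetical helper mirroring the trace above: it runs
	// `minikube status` for one field and tolerates exit status 2, since a paused
	// cluster can still report "Running" for individual components.
	func checkPausedStatus(profile, field string) (string, error) {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{."+field+"}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		status := strings.TrimSpace(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 2 {
			// "exit status 2 (may be ok)" -- same tolerance the post-mortem applies
			return status, nil
		}
		return status, err
	}
	
	func main() {
		for _, field := range []string{"APIServer", "Host"} {
			s, err := checkPausedStatus("old-k8s-version-110407", field)
			fmt.Printf("%s=%q err=%v\n", field, s, err)
		}
	}

This only reproduces the status step; the kubectl field-selector query shown in the trace is unchanged.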
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-110407
helpers_test.go:243: (dbg) docker inspect old-k8s-version-110407:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04",
	        "Created": "2025-10-08T22:53:24.5168981Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 181556,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:54:43.568309319Z",
	            "FinishedAt": "2025-10-08T22:54:42.7498491Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/hostname",
	        "HostsPath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/hosts",
	        "LogPath": "/var/lib/docker/containers/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04-json.log",
	        "Name": "/old-k8s-version-110407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04",
	                "LowerDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33c1f16960b1f6e4667df0689452ae06b880eaf0335fc73be46c893ca7d8ce69/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110407",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110407",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110407",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0fc68605941aba809497f92c326960d331f79f515a0af8e0a5e026f9c621d85d",
	            "SandboxKey": "/var/run/docker/netns/0fc68605941a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:17:01:9c:e8:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ed0b9760a08ed8f2576688b000be4aceb5b3090420383440e59b46e430cff699",
	                    "EndpointID": "9044269747b84de3c6e2c45acbfd893247bbdefaab052d42f9f30746cf8157bb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-110407",
	                        "164acd06879a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
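The Ports map in the inspect output above is how the harness reaches the node: 22/tcp inside the container is published on 127.0.0.1:33056, and that is the port the provisioning log further down dials for SSH. Assuming the container still exists, the same mapping can be read back by hand with the Go template minikube itself uses:

    # print the host port published for the node's SSH server; should match 33056 above
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-110407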
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110407 -n old-k8s-version-110407
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110407 -n old-k8s-version-110407: exit status 2 (404.936946ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-110407 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-110407 logs -n 25: (1.227351917s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-840929 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo containerd config dump                                                                                                                                                                                                  │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo crio config                                                                                                                                                                                                             │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ delete  │ -p cilium-840929                                                                                                                                                                                                                              │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:45 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:46 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ delete  │ -p cert-expiration-292528                                                                                                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ start   │ -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │                     │
	│ delete  │ -p force-systemd-env-092546                                                                                                                                                                                                                   │ force-systemd-env-092546  │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:52 UTC │
	│ start   │ -p cert-options-378019 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ cert-options-378019 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ -p cert-options-378019 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ delete  │ -p cert-options-378019                                                                                                                                                                                                                        │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │                     │
	│ stop    │ -p old-k8s-version-110407 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-110407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:55 UTC │
	│ image   │ old-k8s-version-110407 image list --format=json                                                                                                                                                                                               │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ pause   │ -p old-k8s-version-110407 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
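The last two audit rows are the steps under test: the image list command finished, while the pause invocation has no end time, which is the failure this post-mortem covers. Assuming the same workspace layout used by the helpers above, the failing step could be re-run by hand with:

    # hypothetical local repro of the row with the missing END TIME
    out/minikube-linux-arm64 pause -p old-k8s-version-110407 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110407 -n old-k8s-version-110407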
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:54:43
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:54:43.291880  181429 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:54:43.292097  181429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:54:43.292111  181429 out.go:374] Setting ErrFile to fd 2...
	I1008 22:54:43.292116  181429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:54:43.292421  181429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:54:43.292882  181429 out.go:368] Setting JSON to false
	I1008 22:54:43.293902  181429 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5834,"bootTime":1759958250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:54:43.293976  181429 start.go:141] virtualization:  
	I1008 22:54:43.297093  181429 out.go:179] * [old-k8s-version-110407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:54:43.300973  181429 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:54:43.301029  181429 notify.go:220] Checking for updates...
	I1008 22:54:43.307108  181429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:54:43.310222  181429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:54:43.313251  181429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:54:43.316294  181429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:54:43.319354  181429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:54:43.322801  181429 config.go:182] Loaded profile config "old-k8s-version-110407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:54:43.326296  181429 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1008 22:54:43.329144  181429 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:54:43.359671  181429 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:54:43.359865  181429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:54:43.415702  181429 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:54:43.406774105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:54:43.415821  181429 docker.go:318] overlay module found
	I1008 22:54:43.418872  181429 out.go:179] * Using the docker driver based on existing profile
	I1008 22:54:43.421761  181429 start.go:305] selected driver: docker
	I1008 22:54:43.421787  181429 start.go:925] validating driver "docker" against &{Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:54:43.421898  181429 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:54:43.422644  181429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:54:43.478313  181429 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:54:43.46945783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:54:43.478680  181429 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:54:43.478725  181429 cni.go:84] Creating CNI manager for ""
	I1008 22:54:43.478785  181429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:54:43.478827  181429 start.go:349] cluster config:
	{Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:54:43.483826  181429 out.go:179] * Starting "old-k8s-version-110407" primary control-plane node in "old-k8s-version-110407" cluster
	I1008 22:54:43.486603  181429 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:54:43.489474  181429 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:54:43.492237  181429 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:54:43.492297  181429 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1008 22:54:43.492312  181429 cache.go:58] Caching tarball of preloaded images
	I1008 22:54:43.492399  181429 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:54:43.492415  181429 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1008 22:54:43.492576  181429 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/config.json ...
	I1008 22:54:43.492813  181429 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:54:43.512640  181429 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:54:43.512666  181429 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:54:43.512690  181429 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:54:43.512714  181429 start.go:360] acquireMachinesLock for old-k8s-version-110407: {Name:mkbaacf9b00bd8ee87fd567c565e6e2b19f705c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:54:43.512781  181429 start.go:364] duration metric: took 42.61µs to acquireMachinesLock for "old-k8s-version-110407"
	I1008 22:54:43.512806  181429 start.go:96] Skipping create...Using existing machine configuration
	I1008 22:54:43.512822  181429 fix.go:54] fixHost starting: 
	I1008 22:54:43.513086  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:43.529814  181429 fix.go:112] recreateIfNeeded on old-k8s-version-110407: state=Stopped err=<nil>
	W1008 22:54:43.529848  181429 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 22:54:43.533101  181429 out.go:252] * Restarting existing docker container for "old-k8s-version-110407" ...
	I1008 22:54:43.533185  181429 cli_runner.go:164] Run: docker start old-k8s-version-110407
	I1008 22:54:43.790501  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:43.810606  181429 kic.go:430] container "old-k8s-version-110407" state is running.
	I1008 22:54:43.811005  181429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110407
	I1008 22:54:43.840696  181429 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/config.json ...
	I1008 22:54:43.840927  181429 machine.go:93] provisionDockerMachine start ...
	I1008 22:54:43.840990  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:43.866024  181429 main.go:141] libmachine: Using SSH client type: native
	I1008 22:54:43.866348  181429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33056 <nil> <nil>}
	I1008 22:54:43.866357  181429 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:54:43.867419  181429 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 22:54:47.017448  181429 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-110407
	
	I1008 22:54:47.017473  181429 ubuntu.go:182] provisioning hostname "old-k8s-version-110407"
	I1008 22:54:47.017545  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:47.037432  181429 main.go:141] libmachine: Using SSH client type: native
	I1008 22:54:47.037785  181429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33056 <nil> <nil>}
	I1008 22:54:47.037806  181429 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-110407 && echo "old-k8s-version-110407" | sudo tee /etc/hostname
	I1008 22:54:47.190909  181429 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-110407
	
	I1008 22:54:47.190998  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:47.209895  181429 main.go:141] libmachine: Using SSH client type: native
	I1008 22:54:47.210198  181429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33056 <nil> <nil>}
	I1008 22:54:47.210221  181429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-110407' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-110407/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-110407' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:54:47.353993  181429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
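The shell snippet above only rewrites the 127.0.1.1 entry when the hostname is not already present, so repeated provisioning runs leave /etc/hosts unchanged. With the docker driver the result can be spot-checked from the host, for example:

    # the node container carries the profile name, so exec works directly against it
    docker exec old-k8s-version-110407 grep old-k8s-version-110407 /etc/hosts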
	I1008 22:54:47.354018  181429 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:54:47.354057  181429 ubuntu.go:190] setting up certificates
	I1008 22:54:47.354068  181429 provision.go:84] configureAuth start
	I1008 22:54:47.354129  181429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110407
	I1008 22:54:47.371602  181429 provision.go:143] copyHostCerts
	I1008 22:54:47.371666  181429 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:54:47.371690  181429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:54:47.371767  181429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:54:47.371871  181429 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:54:47.371883  181429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:54:47.371911  181429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:54:47.371971  181429 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:54:47.371980  181429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:54:47.372006  181429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:54:47.372057  181429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-110407 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-110407]
	I1008 22:54:47.685626  181429 provision.go:177] copyRemoteCerts
	I1008 22:54:47.685704  181429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:54:47.685761  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:47.702881  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:47.806357  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:54:47.823993  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1008 22:54:47.841802  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 22:54:47.860203  181429 provision.go:87] duration metric: took 506.107685ms to configureAuth
	I1008 22:54:47.860245  181429 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:54:47.860433  181429 config.go:182] Loaded profile config "old-k8s-version-110407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:54:47.860542  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:47.878060  181429 main.go:141] libmachine: Using SSH client type: native
	I1008 22:54:47.878368  181429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33056 <nil> <nil>}
	I1008 22:54:47.878390  181429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:54:48.187621  181429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:54:48.187641  181429 machine.go:96] duration metric: took 4.346704471s to provisionDockerMachine
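The tee above writes the insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O. If the node container is still up, the rendered file can be checked with, for example:

    docker exec old-k8s-version-110407 cat /etc/sysconfig/crio.minikube
    # expected content: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '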
	I1008 22:54:48.187651  181429 start.go:293] postStartSetup for "old-k8s-version-110407" (driver="docker")
	I1008 22:54:48.187663  181429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:54:48.187731  181429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:54:48.187770  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:48.207873  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:48.313353  181429 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:54:48.317724  181429 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:54:48.317752  181429 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:54:48.317763  181429 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:54:48.317815  181429 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:54:48.317908  181429 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:54:48.318015  181429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:54:48.325618  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:54:48.343588  181429 start.go:296] duration metric: took 155.922078ms for postStartSetup
	I1008 22:54:48.343710  181429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:54:48.343776  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:48.360613  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:48.458877  181429 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:54:48.463750  181429 fix.go:56] duration metric: took 4.950927406s for fixHost
	I1008 22:54:48.463776  181429 start.go:83] releasing machines lock for "old-k8s-version-110407", held for 4.950980634s
	I1008 22:54:48.463844  181429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110407
	I1008 22:54:48.480786  181429 ssh_runner.go:195] Run: cat /version.json
	I1008 22:54:48.480845  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:48.481117  181429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:54:48.481174  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:48.499247  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:48.504671  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:48.690845  181429 ssh_runner.go:195] Run: systemctl --version
	I1008 22:54:48.697264  181429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:54:48.731831  181429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:54:48.736583  181429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:54:48.736658  181429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:54:48.745090  181429 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 22:54:48.745115  181429 start.go:495] detecting cgroup driver to use...
	I1008 22:54:48.745151  181429 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:54:48.745199  181429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:54:48.761500  181429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:54:48.774900  181429 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:54:48.775013  181429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:54:48.791623  181429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:54:48.805286  181429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:54:48.921505  181429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:54:49.042203  181429 docker.go:234] disabling docker service ...
	I1008 22:54:49.042297  181429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:54:49.057668  181429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:54:49.071356  181429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:54:49.192970  181429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:54:49.302928  181429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:54:49.315944  181429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:54:49.330161  181429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1008 22:54:49.330225  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.339166  181429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:54:49.339315  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.349539  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.359245  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.368387  181429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:54:49.376470  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.385285  181429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.393510  181429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:54:49.404220  181429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:54:49.411975  181429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:54:49.420045  181429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:54:49.545017  181429 ssh_runner.go:195] Run: sudo systemctl restart crio
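The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before the daemon-reload and restart. A spot check of the rewritten keys, assuming the node is still running, might look like:

    docker exec old-k8s-version-110407 sh -c 'grep -E "pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start" /etc/crio/crio.conf.d/02-crio.conf'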
	I1008 22:54:49.675649  181429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:54:49.675735  181429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:54:49.679590  181429 start.go:563] Will wait 60s for crictl version
	I1008 22:54:49.679662  181429 ssh_runner.go:195] Run: which crictl
	I1008 22:54:49.683416  181429 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:54:49.712057  181429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:54:49.712142  181429 ssh_runner.go:195] Run: crio --version
	I1008 22:54:49.739173  181429 ssh_runner.go:195] Run: crio --version
	I1008 22:54:49.771087  181429 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1008 22:54:49.774359  181429 cli_runner.go:164] Run: docker network inspect old-k8s-version-110407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:54:49.791222  181429 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:54:49.795000  181429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:54:49.804755  181429 kubeadm.go:883] updating cluster {Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:54:49.804862  181429 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 22:54:49.804914  181429 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:54:49.836826  181429 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:54:49.836852  181429 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:54:49.836905  181429 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:54:49.863093  181429 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:54:49.863119  181429 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:54:49.863128  181429 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1008 22:54:49.863255  181429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-110407 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
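The kubelet flags above are not passed on a command line directly; they are rendered into a systemd drop-in that is copied to the node a few lines below. To see what systemd actually loads, a check in the spirit of the ssh commands in the audit table would be:

    docker exec old-k8s-version-110407 systemctl cat kubelet --no-pager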
	I1008 22:54:49.863340  181429 ssh_runner.go:195] Run: crio config
	I1008 22:54:49.929581  181429 cni.go:84] Creating CNI manager for ""
	I1008 22:54:49.929616  181429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:54:49.929668  181429 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:54:49.929696  181429 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-110407 NodeName:old-k8s-version-110407 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:54:49.929851  181429 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-110407"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:54:49.929931  181429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1008 22:54:49.937586  181429 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:54:49.937703  181429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:54:49.945321  181429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1008 22:54:49.958697  181429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:54:49.972202  181429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
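The kubelet drop-in, the kubelet unit, and the kubeadm config are all generated in memory and copied over; the kubeadm config is staged at /var/tmp/minikube/kubeadm.yaml.new. Assuming the node container is still around, the staged file can be dumped with:

    docker exec old-k8s-version-110407 sudo cat /var/tmp/minikube/kubeadm.yaml.new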
	I1008 22:54:49.984632  181429 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:54:49.988291  181429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:54:49.998431  181429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:54:50.126834  181429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:54:50.146906  181429 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407 for IP: 192.168.85.2
	I1008 22:54:50.146971  181429 certs.go:195] generating shared ca certs ...
	I1008 22:54:50.147002  181429 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:50.147162  181429 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:54:50.147240  181429 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:54:50.147265  181429 certs.go:257] generating profile certs ...
	I1008 22:54:50.147378  181429 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.key
	I1008 22:54:50.147475  181429 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key.5d0843e3
	I1008 22:54:50.147552  181429 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.key
	I1008 22:54:50.147697  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:54:50.147758  181429 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:54:50.147785  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:54:50.147843  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:54:50.147889  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:54:50.147935  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:54:50.148004  181429 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:54:50.148703  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:54:50.170517  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:54:50.190180  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:54:50.208269  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:54:50.229771  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 22:54:50.258205  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:54:50.283783  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:54:50.311693  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 22:54:50.341249  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:54:50.384591  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:54:50.404566  181429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:54:50.427598  181429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:54:50.447699  181429 ssh_runner.go:195] Run: openssl version
	I1008 22:54:50.453960  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:54:50.462887  181429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:54:50.466747  181429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:54:50.466814  181429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:54:50.508076  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:54:50.516156  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:54:50.524736  181429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:54:50.528467  181429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:54:50.528537  181429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:54:50.572271  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:54:50.580521  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:54:50.589081  181429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:54:50.592958  181429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:54:50.593023  181429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:54:50.634526  181429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:54:50.642696  181429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:54:50.646669  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 22:54:50.694573  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 22:54:50.735927  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 22:54:50.784913  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 22:54:50.830844  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 22:54:50.890508  181429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 22:54:50.954184  181429 kubeadm.go:400] StartCluster: {Name:old-k8s-version-110407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-110407 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:54:50.954329  181429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:54:50.954462  181429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:54:51.040876  181429 cri.go:89] found id: "31d5d12b3335847a7a1c8dd5ff7e9ed344177e872405a18bffd7fef7d424e626"
	I1008 22:54:51.040949  181429 cri.go:89] found id: "aff39630382e0b657df55be14c63dfb5df04e731f6be4ae06c64640cbeb9f074"
	I1008 22:54:51.040968  181429 cri.go:89] found id: "e0004b069fee46142e9b07ac07faf8907f947de19c729c522213256f72792263"
	I1008 22:54:51.040987  181429 cri.go:89] found id: "0434b78e1b9c7b8bd208c0bd06784b6ae445fc7d2cf410fea035aea751050584"
	I1008 22:54:51.041026  181429 cri.go:89] found id: ""
	I1008 22:54:51.041143  181429 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 22:54:51.059460  181429 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:54:51Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:54:51.059609  181429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:54:51.073667  181429 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 22:54:51.073741  181429 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 22:54:51.073840  181429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 22:54:51.087025  181429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 22:54:51.087480  181429 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-110407" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:54:51.087589  181429 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-110407" cluster setting kubeconfig missing "old-k8s-version-110407" context setting]
	I1008 22:54:51.087882  181429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:51.089395  181429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 22:54:51.102317  181429 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 22:54:51.102354  181429 kubeadm.go:601] duration metric: took 28.592424ms to restartPrimaryControlPlane
	I1008 22:54:51.102364  181429 kubeadm.go:402] duration metric: took 148.191748ms to StartCluster
	I1008 22:54:51.102383  181429 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:51.102447  181429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:54:51.103131  181429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:54:51.103350  181429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:54:51.103657  181429 config.go:182] Loaded profile config "old-k8s-version-110407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1008 22:54:51.103707  181429 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:54:51.103775  181429 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-110407"
	I1008 22:54:51.103794  181429 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-110407"
	W1008 22:54:51.103890  181429 addons.go:247] addon storage-provisioner should already be in state true
	I1008 22:54:51.103915  181429 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:54:51.104665  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:51.103818  181429 addons.go:69] Setting dashboard=true in profile "old-k8s-version-110407"
	I1008 22:54:51.104950  181429 addons.go:238] Setting addon dashboard=true in "old-k8s-version-110407"
	W1008 22:54:51.104963  181429 addons.go:247] addon dashboard should already be in state true
	I1008 22:54:51.104988  181429 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:54:51.103828  181429 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-110407"
	I1008 22:54:51.105335  181429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-110407"
	I1008 22:54:51.105568  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:51.106537  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:51.109097  181429 out.go:179] * Verifying Kubernetes components...
	I1008 22:54:51.116189  181429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:54:51.157338  181429 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:54:51.161945  181429 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:54:51.161974  181429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:54:51.162046  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:51.183504  181429 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-110407"
	W1008 22:54:51.183528  181429 addons.go:247] addon default-storageclass should already be in state true
	I1008 22:54:51.183552  181429 host.go:66] Checking if "old-k8s-version-110407" exists ...
	I1008 22:54:51.183966  181429 cli_runner.go:164] Run: docker container inspect old-k8s-version-110407 --format={{.State.Status}}
	I1008 22:54:51.207573  181429 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 22:54:51.210527  181429 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 22:54:51.215534  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 22:54:51.215567  181429 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 22:54:51.215643  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:51.245869  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:51.255002  181429 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:54:51.255025  181429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:54:51.255091  181429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110407
	I1008 22:54:51.261788  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:51.295415  181429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33056 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/old-k8s-version-110407/id_rsa Username:docker}
	I1008 22:54:51.468204  181429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:54:51.522724  181429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:54:51.528413  181429 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-110407" to be "Ready" ...
	I1008 22:54:51.530843  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 22:54:51.530924  181429 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 22:54:51.531335  181429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:54:51.587613  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 22:54:51.587679  181429 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 22:54:51.660498  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 22:54:51.660572  181429 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 22:54:51.756344  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 22:54:51.756407  181429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 22:54:51.817220  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 22:54:51.817290  181429 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 22:54:51.842287  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 22:54:51.842350  181429 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 22:54:51.864248  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 22:54:51.864318  181429 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 22:54:51.884877  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 22:54:51.884954  181429 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 22:54:51.908058  181429 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:54:51.908139  181429 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 22:54:51.929228  181429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:54:55.241584  181429 node_ready.go:49] node "old-k8s-version-110407" is "Ready"
	I1008 22:54:55.241659  181429 node_ready.go:38] duration metric: took 3.713205436s for node "old-k8s-version-110407" to be "Ready" ...
	I1008 22:54:55.241674  181429 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:54:55.241798  181429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:54:56.815942  181429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.284557108s)
	I1008 22:54:56.816225  181429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.293468087s)
	I1008 22:54:57.290966  181429 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.049133351s)
	I1008 22:54:57.290999  181429 api_server.go:72] duration metric: took 6.187615746s to wait for apiserver process to appear ...
	I1008 22:54:57.291007  181429 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:54:57.291034  181429 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:54:57.291552  181429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.362231671s)
	I1008 22:54:57.294710  181429 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-110407 addons enable metrics-server
	
	I1008 22:54:57.297714  181429 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1008 22:54:57.301404  181429 addons.go:514] duration metric: took 6.197695068s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1008 22:54:57.302259  181429 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 22:54:57.303736  181429 api_server.go:141] control plane version: v1.28.0
	I1008 22:54:57.303763  181429 api_server.go:131] duration metric: took 12.749998ms to wait for apiserver health ...
	I1008 22:54:57.303772  181429 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:54:57.308500  181429 system_pods.go:59] 8 kube-system pods found
	I1008 22:54:57.308543  181429 system_pods.go:61] "coredns-5dd5756b68-p9wsf" [94a25734-c268-4a26-8995-467082f156ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:54:57.308552  181429 system_pods.go:61] "etcd-old-k8s-version-110407" [9341d2d7-8457-4042-953a-042454abf107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:54:57.308558  181429 system_pods.go:61] "kindnet-dzbkd" [293adcb3-a304-42a9-8533-ef23cf040ea6] Running
	I1008 22:54:57.308565  181429 system_pods.go:61] "kube-apiserver-old-k8s-version-110407" [ecdf237a-8269-4ebc-a83b-0f08d6f8157f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:54:57.308572  181429 system_pods.go:61] "kube-controller-manager-old-k8s-version-110407" [8f6a76d5-b9f0-494e-91b8-f3800acb243c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:54:57.308577  181429 system_pods.go:61] "kube-proxy-gsbl4" [cccf2800-b3c8-4684-bc54-d88b59e04bb6] Running
	I1008 22:54:57.308589  181429 system_pods.go:61] "kube-scheduler-old-k8s-version-110407" [695e473d-ed17-4a6f-ada7-b54cde1e5ddc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:54:57.308602  181429 system_pods.go:61] "storage-provisioner" [6105db1d-9197-46c6-8ae0-49fe2291d679] Running
	I1008 22:54:57.308608  181429 system_pods.go:74] duration metric: took 4.83158ms to wait for pod list to return data ...
	I1008 22:54:57.308621  181429 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:54:57.314845  181429 default_sa.go:45] found service account: "default"
	I1008 22:54:57.314875  181429 default_sa.go:55] duration metric: took 6.247154ms for default service account to be created ...
	I1008 22:54:57.314886  181429 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:54:57.319333  181429 system_pods.go:86] 8 kube-system pods found
	I1008 22:54:57.319369  181429 system_pods.go:89] "coredns-5dd5756b68-p9wsf" [94a25734-c268-4a26-8995-467082f156ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:54:57.319380  181429 system_pods.go:89] "etcd-old-k8s-version-110407" [9341d2d7-8457-4042-953a-042454abf107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:54:57.319387  181429 system_pods.go:89] "kindnet-dzbkd" [293adcb3-a304-42a9-8533-ef23cf040ea6] Running
	I1008 22:54:57.319395  181429 system_pods.go:89] "kube-apiserver-old-k8s-version-110407" [ecdf237a-8269-4ebc-a83b-0f08d6f8157f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:54:57.319405  181429 system_pods.go:89] "kube-controller-manager-old-k8s-version-110407" [8f6a76d5-b9f0-494e-91b8-f3800acb243c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:54:57.319419  181429 system_pods.go:89] "kube-proxy-gsbl4" [cccf2800-b3c8-4684-bc54-d88b59e04bb6] Running
	I1008 22:54:57.319427  181429 system_pods.go:89] "kube-scheduler-old-k8s-version-110407" [695e473d-ed17-4a6f-ada7-b54cde1e5ddc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:54:57.319437  181429 system_pods.go:89] "storage-provisioner" [6105db1d-9197-46c6-8ae0-49fe2291d679] Running
	I1008 22:54:57.319444  181429 system_pods.go:126] duration metric: took 4.552841ms to wait for k8s-apps to be running ...
	I1008 22:54:57.319454  181429 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:54:57.319508  181429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:54:57.334425  181429 system_svc.go:56] duration metric: took 14.961938ms WaitForService to wait for kubelet
	I1008 22:54:57.334456  181429 kubeadm.go:586] duration metric: took 6.231071448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:54:57.334474  181429 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:54:57.337035  181429 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:54:57.337070  181429 node_conditions.go:123] node cpu capacity is 2
	I1008 22:54:57.337082  181429 node_conditions.go:105] duration metric: took 2.60291ms to run NodePressure ...
	I1008 22:54:57.337095  181429 start.go:241] waiting for startup goroutines ...
	I1008 22:54:57.337103  181429 start.go:246] waiting for cluster config update ...
	I1008 22:54:57.337114  181429 start.go:255] writing updated cluster config ...
	I1008 22:54:57.337398  181429 ssh_runner.go:195] Run: rm -f paused
	I1008 22:54:57.341370  181429 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:54:57.346941  181429 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-p9wsf" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 22:54:59.353849  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:01.853023  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:04.352864  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:06.852953  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:08.854456  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:11.352636  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:13.354075  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:15.854455  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:18.353001  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:20.353334  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:22.855536  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	W1008 22:55:25.352938  181429 pod_ready.go:104] pod "coredns-5dd5756b68-p9wsf" is not "Ready", error: <nil>
	I1008 22:55:26.853605  181429 pod_ready.go:94] pod "coredns-5dd5756b68-p9wsf" is "Ready"
	I1008 22:55:26.853671  181429 pod_ready.go:86] duration metric: took 29.506703759s for pod "coredns-5dd5756b68-p9wsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.856697  181429 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.861541  181429 pod_ready.go:94] pod "etcd-old-k8s-version-110407" is "Ready"
	I1008 22:55:26.861571  181429 pod_ready.go:86] duration metric: took 4.848401ms for pod "etcd-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.864941  181429 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.870530  181429 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-110407" is "Ready"
	I1008 22:55:26.870560  181429 pod_ready.go:86] duration metric: took 5.590924ms for pod "kube-apiserver-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:26.873687  181429 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:27.050298  181429 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-110407" is "Ready"
	I1008 22:55:27.050330  181429 pod_ready.go:86] duration metric: took 176.61453ms for pod "kube-controller-manager-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:27.250955  181429 pod_ready.go:83] waiting for pod "kube-proxy-gsbl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:27.650544  181429 pod_ready.go:94] pod "kube-proxy-gsbl4" is "Ready"
	I1008 22:55:27.650572  181429 pod_ready.go:86] duration metric: took 399.591208ms for pod "kube-proxy-gsbl4" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:27.852083  181429 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:28.250749  181429 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-110407" is "Ready"
	I1008 22:55:28.250778  181429 pod_ready.go:86] duration metric: took 398.65724ms for pod "kube-scheduler-old-k8s-version-110407" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:55:28.250791  181429 pod_ready.go:40] duration metric: took 30.909384228s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:55:28.309380  181429 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1008 22:55:28.312305  181429 out.go:203] 
	W1008 22:55:28.315373  181429 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1008 22:55:28.318253  181429 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1008 22:55:28.321132  181429 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-110407" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.483481147Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58d419bc-2db2-4d97-b8e2-337ad5633f44 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.486614681Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a5eca641-b41a-4e76-bb41-c17cb617aa43 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.491333833Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6/dashboard-metrics-scraper" id=c24cd51f-ee84-4784-a0d1-cea5f736a40e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.491604844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.499193683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.499880624Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.531633889Z" level=info msg="Created container c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6/dashboard-metrics-scraper" id=c24cd51f-ee84-4784-a0d1-cea5f736a40e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.534069585Z" level=info msg="Starting container: c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f" id=87160fbb-1801-4c32-b31c-fd26d04c3278 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:55:28 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:28.535958887Z" level=info msg="Started container" PID=1639 containerID=c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6/dashboard-metrics-scraper id=87160fbb-1801-4c32-b31c-fd26d04c3278 name=/runtime.v1.RuntimeService/StartContainer sandboxID=77e405280b1e53f0e6f90e2cd7ac1d29e6ab8d2bc24779386f355a4a9567aa3f
	Oct 08 22:55:28 old-k8s-version-110407 conmon[1636]: conmon c246c8270cd890985c9f <ninfo>: container 1639 exited with status 1
	Oct 08 22:55:29 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:29.531931248Z" level=info msg="Removing container: 68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881" id=ad948224-dd7a-431d-8252-6a7e0f3e874b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:55:29 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:29.541966647Z" level=info msg="Error loading conmon cgroup of container 68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881: cgroup deleted" id=ad948224-dd7a-431d-8252-6a7e0f3e874b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:55:29 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:29.547081891Z" level=info msg="Removed container 68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6/dashboard-metrics-scraper" id=ad948224-dd7a-431d-8252-6a7e0f3e874b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.207945809Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.212229781Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.212411642Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.212450871Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.215747837Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.215780814Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.215803961Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.21900815Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.219044253Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.219071051Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.222612343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:55:36 old-k8s-version-110407 crio[651]: time="2025-10-08T22:55:36.222654477Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c246c8270cd89       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   77e405280b1e5       dashboard-metrics-scraper-5f989dc9cf-9nlr6       kubernetes-dashboard
	3edfc0b306932       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   db92f45385f10       storage-provisioner                              kube-system
	a87a33253a636       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   a486ba6d80c07       kubernetes-dashboard-8694d4445c-wfmhw            kubernetes-dashboard
	9af0417f322b6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   31cdd7240bbbe       busybox                                          default
	d72eb628fb497       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           50 seconds ago      Running             coredns                     1                   02e73c19aab54       coredns-5dd5756b68-p9wsf                         kube-system
	dd58f37f74850       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   b9bc32a8584d8       kindnet-dzbkd                                    kube-system
	0e580a0fb08ba       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           50 seconds ago      Running             kube-proxy                  1                   2987aeced0dc5       kube-proxy-gsbl4                                 kube-system
	1089bc6ec608e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   db92f45385f10       storage-provisioner                              kube-system
	31d5d12b33358       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           55 seconds ago      Running             kube-apiserver              1                   7ba2aca9cbd2e       kube-apiserver-old-k8s-version-110407            kube-system
	aff39630382e0       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           55 seconds ago      Running             kube-controller-manager     1                   78d95e6215fc3       kube-controller-manager-old-k8s-version-110407   kube-system
	e0004b069fee4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           55 seconds ago      Running             etcd                        1                   837d7d22aa22c       etcd-old-k8s-version-110407                      kube-system
	0434b78e1b9c7       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           55 seconds ago      Running             kube-scheduler              1                   64eeda4998dfa       kube-scheduler-old-k8s-version-110407            kube-system
	
	
	==> coredns [d72eb628fb497376f9eefcba7d2f6f36dfe625924dc6e8dd4130842c7d32eee3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55707 - 61582 "HINFO IN 6697083693980746306.7114089042911625345. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.075436956s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-110407
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-110407
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=old-k8s-version-110407
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_53_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:53:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-110407
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:55:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:55:26 +0000   Wed, 08 Oct 2025 22:53:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:55:26 +0000   Wed, 08 Oct 2025 22:53:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:55:26 +0000   Wed, 08 Oct 2025 22:53:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:55:26 +0000   Wed, 08 Oct 2025 22:54:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-110407
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2f6f93ebbc24529ad1c0a658632a5da
	  System UUID:                8dba2821-4735-44a2-98ca-98cb78fcdea2
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-p9wsf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-old-k8s-version-110407                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-dzbkd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-110407             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-110407    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-gsbl4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-110407             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-9nlr6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-wfmhw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-110407 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-110407 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-110407 event: Registered Node old-k8s-version-110407 in Controller
	  Normal  NodeReady                91s                  kubelet          Node old-k8s-version-110407 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node old-k8s-version-110407 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node old-k8s-version-110407 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-110407 event: Registered Node old-k8s-version-110407 in Controller
	
	
	==> dmesg <==
	[Oct 8 22:22] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:27] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e0004b069fee46142e9b07ac07faf8907f947de19c729c522213256f72792263] <==
	{"level":"info","ts":"2025-10-08T22:54:51.083546Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-08T22:54:51.083621Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-08T22:54:51.084056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-08T22:54:51.093265Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-08T22:54:51.093386Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T22:54:51.093424Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-08T22:54:51.095309Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-08T22:54:51.095526Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-08T22:54:51.095558Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-08T22:54:51.09563Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-08T22:54:51.095643Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-08T22:54:52.493662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-08T22:54:52.493714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-08T22:54:52.493749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-08T22:54:52.493764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-08T22:54:52.493771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-08T22:54:52.49378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-08T22:54:52.493788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-08T22:54:52.499733Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-110407 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-08T22:54:52.499784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-08T22:54:52.499972Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-08T22:54:52.500054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-08T22:54:52.501251Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-08T22:54:52.499803Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-08T22:54:52.505815Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:55:46 up  1:38,  0 user,  load average: 1.17, 1.31, 1.65
	Linux old-k8s-version-110407 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dd58f37f74850810388d93ef413e3efb1a36fce33e2dc09297330f27a8cbf5c1] <==
	I1008 22:54:55.916361       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:54:56.002436       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 22:54:56.002702       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:54:56.002752       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:54:56.003052       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:54:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:54:56.203830       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:54:56.203847       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:54:56.203855       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:54:56.204136       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 22:55:26.203954       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 22:55:26.203954       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 22:55:26.204065       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 22:55:26.204936       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1008 22:55:27.504093       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:55:27.504192       1 metrics.go:72] Registering metrics
	I1008 22:55:27.504291       1 controller.go:711] "Syncing nftables rules"
	I1008 22:55:36.207646       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:55:36.207684       1 main.go:301] handling current node
	I1008 22:55:46.209703       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:55:46.209732       1 main.go:301] handling current node
	
	
	==> kube-apiserver [31d5d12b3335847a7a1c8dd5ff7e9ed344177e872405a18bffd7fef7d424e626] <==
	I1008 22:54:55.300044       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1008 22:54:55.317777       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1008 22:54:55.318005       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1008 22:54:55.318023       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1008 22:54:55.319371       1 aggregator.go:166] initial CRD sync complete...
	I1008 22:54:55.319399       1 autoregister_controller.go:141] Starting autoregister controller
	I1008 22:54:55.319405       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1008 22:54:55.320503       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1008 22:54:55.322562       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:54:55.366479       1 shared_informer.go:318] Caches are synced for configmaps
	I1008 22:54:55.376935       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 22:54:55.390931       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1008 22:54:55.425560       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 22:54:55.431420       1 cache.go:39] Caches are synced for autoregister controller
	I1008 22:54:56.007772       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:54:57.090044       1 controller.go:624] quota admission added evaluator for: namespaces
	I1008 22:54:57.140978       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1008 22:54:57.168982       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 22:54:57.182058       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 22:54:57.191969       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1008 22:54:57.259563       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.76.243"}
	I1008 22:54:57.283386       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.119.174"}
	I1008 22:55:07.977184       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:55:08.026775       1 controller.go:624] quota admission added evaluator for: endpoints
	I1008 22:55:08.130464       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [aff39630382e0b657df55be14c63dfb5df04e731f6be4ae06c64640cbeb9f074] <==
	I1008 22:55:07.934231       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.433µs"
	I1008 22:55:08.137529       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1008 22:55:08.143965       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1008 22:55:08.160966       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-wfmhw"
	I1008 22:55:08.160993       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-9nlr6"
	I1008 22:55:08.161908       1 shared_informer.go:318] Caches are synced for garbage collector
	I1008 22:55:08.165415       1 shared_informer.go:318] Caches are synced for garbage collector
	I1008 22:55:08.165503       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1008 22:55:08.169799       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.560823ms"
	I1008 22:55:08.181445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.623728ms"
	I1008 22:55:08.191801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.492714ms"
	I1008 22:55:08.191914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.407µs"
	I1008 22:55:08.199732       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="18.23309ms"
	I1008 22:55:08.199824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.391µs"
	I1008 22:55:08.203528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.359µs"
	I1008 22:55:08.218735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.023µs"
	I1008 22:55:13.543806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.263866ms"
	I1008 22:55:13.543927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.15µs"
	I1008 22:55:17.510077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.139µs"
	I1008 22:55:18.522842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.989µs"
	I1008 22:55:19.522110       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.367µs"
	I1008 22:55:26.577926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.581079ms"
	I1008 22:55:26.578183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.307µs"
	I1008 22:55:29.550062       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.857µs"
	I1008 22:55:38.498852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.284µs"
	
	
	==> kube-proxy [0e580a0fb08ba90a90f58f3c01972f80ab2064c7b7f180e3447dce96336f16c7] <==
	I1008 22:54:56.018867       1 server_others.go:69] "Using iptables proxy"
	I1008 22:54:56.056121       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1008 22:54:56.243032       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:54:56.254091       1 server_others.go:152] "Using iptables Proxier"
	I1008 22:54:56.254130       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1008 22:54:56.254139       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1008 22:54:56.254170       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1008 22:54:56.254397       1 server.go:846] "Version info" version="v1.28.0"
	I1008 22:54:56.254408       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:54:56.255610       1 config.go:188] "Starting service config controller"
	I1008 22:54:56.255622       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1008 22:54:56.255638       1 config.go:97] "Starting endpoint slice config controller"
	I1008 22:54:56.255642       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1008 22:54:56.255992       1 config.go:315] "Starting node config controller"
	I1008 22:54:56.255998       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1008 22:54:56.356041       1 shared_informer.go:318] Caches are synced for node config
	I1008 22:54:56.356082       1 shared_informer.go:318] Caches are synced for service config
	I1008 22:54:56.356113       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0434b78e1b9c7b8bd208c0bd06784b6ae445fc7d2cf410fea035aea751050584] <==
	I1008 22:54:53.718786       1 serving.go:348] Generated self-signed cert in-memory
	W1008 22:54:55.206068       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 22:54:55.206179       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 22:54:55.206212       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 22:54:55.206253       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 22:54:55.303843       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1008 22:54:55.303941       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:54:55.309717       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:54:55.309780       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1008 22:54:55.309976       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1008 22:54:55.310090       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1008 22:54:55.411334       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 08 22:55:08 old-k8s-version-110407 kubelet[774]: I1008 22:55:08.307247     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c8d79e86-94d5-4b6b-ba36-3245de9e0ae5-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-9nlr6\" (UID: \"c8d79e86-94d5-4b6b-ba36-3245de9e0ae5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6"
	Oct 08 22:55:08 old-k8s-version-110407 kubelet[774]: I1008 22:55:08.307282     774 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496rj\" (UniqueName: \"kubernetes.io/projected/c8d79e86-94d5-4b6b-ba36-3245de9e0ae5-kube-api-access-496rj\") pod \"dashboard-metrics-scraper-5f989dc9cf-9nlr6\" (UID: \"c8d79e86-94d5-4b6b-ba36-3245de9e0ae5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6"
	Oct 08 22:55:08 old-k8s-version-110407 kubelet[774]: W1008 22:55:08.526431     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/crio-a486ba6d80c078f99c70dd996e78f2fa97390883c0d087567fd3e03ec7eb6413 WatchSource:0}: Error finding container a486ba6d80c078f99c70dd996e78f2fa97390883c0d087567fd3e03ec7eb6413: Status 404 returned error can't find the container with id a486ba6d80c078f99c70dd996e78f2fa97390883c0d087567fd3e03ec7eb6413
	Oct 08 22:55:08 old-k8s-version-110407 kubelet[774]: W1008 22:55:08.532020     774 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/164acd06879a63aca8fb5b1e9a5f63ba000536834de346c3a5ae9f7d3e567c04/crio-77e405280b1e53f0e6f90e2cd7ac1d29e6ab8d2bc24779386f355a4a9567aa3f WatchSource:0}: Error finding container 77e405280b1e53f0e6f90e2cd7ac1d29e6ab8d2bc24779386f355a4a9567aa3f: Status 404 returned error can't find the container with id 77e405280b1e53f0e6f90e2cd7ac1d29e6ab8d2bc24779386f355a4a9567aa3f
	Oct 08 22:55:17 old-k8s-version-110407 kubelet[774]: I1008 22:55:17.492881     774 scope.go:117] "RemoveContainer" containerID="532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"
	Oct 08 22:55:17 old-k8s-version-110407 kubelet[774]: I1008 22:55:17.508418     774 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-wfmhw" podStartSLOduration=5.04770703 podCreationTimestamp="2025-10-08 22:55:08 +0000 UTC" firstStartedPulling="2025-10-08 22:55:08.530941212 +0000 UTC m=+18.384849969" lastFinishedPulling="2025-10-08 22:55:12.991589415 +0000 UTC m=+22.845498205" observedRunningTime="2025-10-08 22:55:13.502163216 +0000 UTC m=+23.356072055" watchObservedRunningTime="2025-10-08 22:55:17.508355266 +0000 UTC m=+27.362264023"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: I1008 22:55:18.497700     774 scope.go:117] "RemoveContainer" containerID="532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: I1008 22:55:18.498257     774 scope.go:117] "RemoveContainer" containerID="68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: E1008 22:55:18.498613     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9nlr6_kubernetes-dashboard(c8d79e86-94d5-4b6b-ba36-3245de9e0ae5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6" podUID="c8d79e86-94d5-4b6b-ba36-3245de9e0ae5"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: I1008 22:55:18.517440     774 scope.go:117] "RemoveContainer" containerID="532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: E1008 22:55:18.518116     774 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046\": container with ID starting with 532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046 not found: ID does not exist" containerID="532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"
	Oct 08 22:55:18 old-k8s-version-110407 kubelet[774]: I1008 22:55:18.518349     774 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046"} err="failed to get container status \"532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046\": rpc error: code = NotFound desc = could not find container \"532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046\": container with ID starting with 532d2bb451299f9728094e7957d5cd4686e8c2448e48ef8280a8dac8238ed046 not found: ID does not exist"
	Oct 08 22:55:19 old-k8s-version-110407 kubelet[774]: I1008 22:55:19.501239     774 scope.go:117] "RemoveContainer" containerID="68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881"
	Oct 08 22:55:19 old-k8s-version-110407 kubelet[774]: E1008 22:55:19.502083     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9nlr6_kubernetes-dashboard(c8d79e86-94d5-4b6b-ba36-3245de9e0ae5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6" podUID="c8d79e86-94d5-4b6b-ba36-3245de9e0ae5"
	Oct 08 22:55:26 old-k8s-version-110407 kubelet[774]: I1008 22:55:26.518371     774 scope.go:117] "RemoveContainer" containerID="1089bc6ec608e4f6ff237f1aa25f35c60b338495b151b22c2b52a17146b6be9c"
	Oct 08 22:55:28 old-k8s-version-110407 kubelet[774]: I1008 22:55:28.482471     774 scope.go:117] "RemoveContainer" containerID="68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881"
	Oct 08 22:55:29 old-k8s-version-110407 kubelet[774]: I1008 22:55:29.528640     774 scope.go:117] "RemoveContainer" containerID="68ee702cf6a848582d8602e2eae7c2a6a9044d2cd785c6a5c1cf1ca1ce6ed881"
	Oct 08 22:55:29 old-k8s-version-110407 kubelet[774]: I1008 22:55:29.528925     774 scope.go:117] "RemoveContainer" containerID="c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f"
	Oct 08 22:55:29 old-k8s-version-110407 kubelet[774]: E1008 22:55:29.529237     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9nlr6_kubernetes-dashboard(c8d79e86-94d5-4b6b-ba36-3245de9e0ae5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6" podUID="c8d79e86-94d5-4b6b-ba36-3245de9e0ae5"
	Oct 08 22:55:38 old-k8s-version-110407 kubelet[774]: I1008 22:55:38.482564     774 scope.go:117] "RemoveContainer" containerID="c246c8270cd890985c9f44a2ab9bd30031695ef36a03b698a11ad392925f741f"
	Oct 08 22:55:38 old-k8s-version-110407 kubelet[774]: E1008 22:55:38.483357     774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-9nlr6_kubernetes-dashboard(c8d79e86-94d5-4b6b-ba36-3245de9e0ae5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-9nlr6" podUID="c8d79e86-94d5-4b6b-ba36-3245de9e0ae5"
	Oct 08 22:55:41 old-k8s-version-110407 kubelet[774]: I1008 22:55:41.521099     774 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 08 22:55:41 old-k8s-version-110407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 22:55:41 old-k8s-version-110407 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 22:55:41 old-k8s-version-110407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a87a33253a6361e87d6c423aff7d47025b3d27557d3d58981f75a36ab84eb3a8] <==
	2025/10/08 22:55:13 Using namespace: kubernetes-dashboard
	2025/10/08 22:55:13 Using in-cluster config to connect to apiserver
	2025/10/08 22:55:13 Using secret token for csrf signing
	2025/10/08 22:55:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/08 22:55:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/08 22:55:13 Successful initial request to the apiserver, version: v1.28.0
	2025/10/08 22:55:13 Generating JWE encryption key
	2025/10/08 22:55:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/08 22:55:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/08 22:55:15 Initializing JWE encryption key from synchronized object
	2025/10/08 22:55:15 Creating in-cluster Sidecar client
	2025/10/08 22:55:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 22:55:15 Serving insecurely on HTTP port: 9090
	2025/10/08 22:55:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 22:55:13 Starting overwatch
	
	
	==> storage-provisioner [1089bc6ec608e4f6ff237f1aa25f35c60b338495b151b22c2b52a17146b6be9c] <==
	I1008 22:54:55.926000       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 22:55:25.928401       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [3edfc0b30693211c28865d5219a9146586f475e536683705e62e7fee3cbd1d18] <==
	I1008 22:55:26.575707       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 22:55:26.594695       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 22:55:26.594839       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 22:55:43.994967       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 22:55:43.995124       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-110407_b9ed06f4-aaf9-4de2-ac97-3f9149e8a08a!
	I1008 22:55:43.996044       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f2116a6-5967-4c8b-a3c3-8076bb9f79ff", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-110407_b9ed06f4-aaf9-4de2-ac97-3f9149e8a08a became leader
	I1008 22:55:44.095318       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-110407_b9ed06f4-aaf9-4de2-ac97-3f9149e8a08a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-110407 -n old-k8s-version-110407
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-110407 -n old-k8s-version-110407: exit status 2 (378.07256ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-110407 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (388.409327ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:57:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-939665 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-939665 describe deploy/metrics-server -n kube-system: exit status 1 (82.038422ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-939665 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-939665
helpers_test.go:243: (dbg) docker inspect no-preload-939665:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4",
	        "Created": "2025-10-08T22:55:51.376878504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185419,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:55:51.450601776Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/hostname",
	        "HostsPath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/hosts",
	        "LogPath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4-json.log",
	        "Name": "/no-preload-939665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-939665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-939665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4",
	                "LowerDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-939665",
	                "Source": "/var/lib/docker/volumes/no-preload-939665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-939665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-939665",
	                "name.minikube.sigs.k8s.io": "no-preload-939665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "59f5c2debff1b48874eb645cb44d0d733c73b7bb4f914c7329d1d66c7f2ce859",
	            "SandboxKey": "/var/run/docker/netns/59f5c2debff1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-939665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:f2:0f:b3:01:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc660108ce7e425dc8ccc8b9b4c79d2e7285488dbd4605c4f5b483d992fc9478",
	                    "EndpointID": "4613c20a5badf1cd59509d9e9f549d64b419489003bd0aaab081ac62267ccada",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-939665",
	                        "28f143a4ef4a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-939665 -n no-preload-939665
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-939665 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-939665 logs -n 25: (1.20325459s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-840929 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ ssh     │ -p cilium-840929 sudo crio config                                                                                                                                                                                                             │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │                     │
	│ delete  │ -p cilium-840929                                                                                                                                                                                                                              │ cilium-840929             │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:45 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:46 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ delete  │ -p cert-expiration-292528                                                                                                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ start   │ -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │                     │
	│ delete  │ -p force-systemd-env-092546                                                                                                                                                                                                                   │ force-systemd-env-092546  │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:52 UTC │
	│ start   │ -p cert-options-378019 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ cert-options-378019 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ -p cert-options-378019 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ delete  │ -p cert-options-378019                                                                                                                                                                                                                        │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │                     │
	│ stop    │ -p old-k8s-version-110407 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-110407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:55 UTC │
	│ image   │ old-k8s-version-110407 image list --format=json                                                                                                                                                                                               │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ pause   │ -p old-k8s-version-110407 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │                     │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:55:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:55:50.219073  185103 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:55:50.219290  185103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:55:50.219318  185103 out.go:374] Setting ErrFile to fd 2...
	I1008 22:55:50.219338  185103 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:55:50.220125  185103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:55:50.220566  185103 out.go:368] Setting JSON to false
	I1008 22:55:50.221387  185103 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5901,"bootTime":1759958250,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:55:50.221483  185103 start.go:141] virtualization:  
	I1008 22:55:50.225691  185103 out.go:179] * [no-preload-939665] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:55:50.230272  185103 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:55:50.230467  185103 notify.go:220] Checking for updates...
	I1008 22:55:50.236956  185103 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:55:50.240112  185103 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:55:50.243241  185103 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:55:50.246268  185103 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:55:50.249359  185103 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:55:50.253012  185103 config.go:182] Loaded profile config "force-systemd-flag-385382": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:55:50.253168  185103 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:55:50.278296  185103 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:55:50.278450  185103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:55:50.345263  185103 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:55:50.336228422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:55:50.345375  185103 docker.go:318] overlay module found
	I1008 22:55:50.348556  185103 out.go:179] * Using the docker driver based on user configuration
	I1008 22:55:50.351430  185103 start.go:305] selected driver: docker
	I1008 22:55:50.351450  185103 start.go:925] validating driver "docker" against <nil>
	I1008 22:55:50.351472  185103 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:55:50.352201  185103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:55:50.413580  185103 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:55:50.404954713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:55:50.413769  185103 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 22:55:50.414007  185103 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:55:50.417005  185103 out.go:179] * Using Docker driver with root privileges
	I1008 22:55:50.419840  185103 cni.go:84] Creating CNI manager for ""
	I1008 22:55:50.419916  185103 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:55:50.419925  185103 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 22:55:50.420000  185103 start.go:349] cluster config:
	{Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:55:50.425039  185103 out.go:179] * Starting "no-preload-939665" primary control-plane node in "no-preload-939665" cluster
	I1008 22:55:50.427919  185103 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:55:50.430734  185103 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:55:50.433486  185103 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:55:50.433572  185103 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:55:50.433616  185103 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/config.json ...
	I1008 22:55:50.433695  185103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/config.json: {Name:mk145f2f0d5ade800740f0a334950fdbb7de3c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:55:50.433967  185103 cache.go:107] acquiring lock: {Name:mk344f5adac59ef32f6d69c009b0f8ec87052611 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:55:50.434064  185103 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1008 22:55:50.434104  185103 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 137.314µs
	I1008 22:55:50.434131  185103 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1008 22:55:50.434159  185103 cache.go:107] acquiring lock: {Name:mk2a1f78f7d6511aea6d634a58ed1c88718aab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:55:50.434277  185103 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1008 22:55:50.434639  185103 cache.go:107] acquiring lock: {Name:mk7141aa7b89df55e8dad25221487d909ba46017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:55:50.434788  185103 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1008 22:55:50.435025  185103 cache.go:107] acquiring lock: {Name:mk49b6b290192d16491277897c30c50e3badc30b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:55:50.435169  185103 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1008 22:55:50.435457  185103 cache.go:107] acquiring lock: {Name:mka3f9c49147e0e292b0cfd3d6255817b177ac9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:55:50.435583  185103 cache.go:107] acquiring lock: {Name:mk85b30d8a79adbfa59b06c1c836919be1606fc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:55:50.435886  185103 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1008 22:55:50.436129  185103 cache.go:107] acquiring lock: {Name:mka1ae807285591bb895528e804cb6d37d5af28f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:55:50.436223  185103 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1008 22:55:50.436449  185103 cache.go:107] acquiring lock: {Name:mk61bfc3bad4ca73036eaa8d93cb87fd5c241083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:55:50.436546  185103 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1008 22:55:50.438011  185103 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1008 22:55:50.438480  185103 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1008 22:55:50.439553  185103 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1008 22:55:50.441738  185103 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1008 22:55:50.442164  185103 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1008 22:55:50.443319  185103 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1008 22:55:50.443910  185103 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1008 22:55:50.444758  185103 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1008 22:55:50.457340  185103 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:55:50.457405  185103 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:55:50.457457  185103 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:55:50.457498  185103 start.go:360] acquireMachinesLock for no-preload-939665: {Name:mk60e1980ef0e273f848717956362180f47a8fab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:55:50.457709  185103 start.go:364] duration metric: took 178.185µs to acquireMachinesLock for "no-preload-939665"
	I1008 22:55:50.457780  185103 start.go:93] Provisioning new machine with config: &{Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:55:50.457885  185103 start.go:125] createHost starting for "" (driver="docker")
	I1008 22:55:50.461485  185103 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:55:50.461821  185103 start.go:159] libmachine.API.Create for "no-preload-939665" (driver="docker")
	I1008 22:55:50.461884  185103 client.go:168] LocalClient.Create starting
	I1008 22:55:50.462000  185103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:55:50.462049  185103 main.go:141] libmachine: Decoding PEM data...
	I1008 22:55:50.462083  185103 main.go:141] libmachine: Parsing certificate...
	I1008 22:55:50.462170  185103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:55:50.462211  185103 main.go:141] libmachine: Decoding PEM data...
	I1008 22:55:50.462246  185103 main.go:141] libmachine: Parsing certificate...
	I1008 22:55:50.462643  185103 cli_runner.go:164] Run: docker network inspect no-preload-939665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:55:50.487780  185103 cli_runner.go:211] docker network inspect no-preload-939665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:55:50.487861  185103 network_create.go:284] running [docker network inspect no-preload-939665] to gather additional debugging logs...
	I1008 22:55:50.487883  185103 cli_runner.go:164] Run: docker network inspect no-preload-939665
	W1008 22:55:50.503868  185103 cli_runner.go:211] docker network inspect no-preload-939665 returned with exit code 1
	I1008 22:55:50.503911  185103 network_create.go:287] error running [docker network inspect no-preload-939665]: docker network inspect no-preload-939665: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-939665 not found
	I1008 22:55:50.503926  185103 network_create.go:289] output of [docker network inspect no-preload-939665]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-939665 not found
	
	** /stderr **
	I1008 22:55:50.504015  185103 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:55:50.520732  185103 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:55:50.521078  185103 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:55:50.521362  185103 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:55:50.521596  185103 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-94ec01d43e41 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:6d:06:9b:60:31} reservation:<nil>}
	I1008 22:55:50.522056  185103 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001be1de0}
	I1008 22:55:50.522081  185103 network_create.go:124] attempt to create docker network no-preload-939665 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1008 22:55:50.522146  185103 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-939665 no-preload-939665
	I1008 22:55:50.608828  185103 network_create.go:108] docker network no-preload-939665 192.168.85.0/24 created
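
The network_create lines above probe the bridge networks already present (192.168.49.0/24 through 192.168.76.0/24 are taken), pick the first free /24, and create the profile network with a fixed gateway. A minimal Go sketch of that probe-then-create flow, assuming only that the docker CLI is on PATH; the candidate list and control flow are illustrative, not minikube's internals.

// probe existing docker networks, then create the profile network on the
// first free /24, mirroring the network_create.go lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	candidates := []string{"192.168.49", "192.168.58", "192.168.67", "192.168.76", "192.168.85"}

	// Collect the subnets already used by existing docker networks.
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		panic(err)
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		sub, _ := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		taken[strings.TrimSpace(string(sub))] = true
	}

	for _, prefix := range candidates {
		subnet, gateway := prefix+".0/24", prefix+".1"
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		// Same flags as the `docker network create` call in the log.
		err := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=no-preload-939665",
			"no-preload-939665").Run()
		if err != nil {
			panic(err)
		}
		fmt.Println("created", subnet)
		return
	}
}
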
	I1008 22:55:50.608867  185103 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-939665" container
	I1008 22:55:50.608993  185103 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:55:50.626101  185103 cli_runner.go:164] Run: docker volume create no-preload-939665 --label name.minikube.sigs.k8s.io=no-preload-939665 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:55:50.648830  185103 oci.go:103] Successfully created a docker volume no-preload-939665
	I1008 22:55:50.648912  185103 cli_runner.go:164] Run: docker run --rm --name no-preload-939665-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-939665 --entrypoint /usr/bin/test -v no-preload-939665:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:55:50.763864  185103 cache.go:162] opening:  /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1008 22:55:50.777711  185103 cache.go:162] opening:  /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1008 22:55:50.790804  185103 cache.go:162] opening:  /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1008 22:55:50.802674  185103 cache.go:162] opening:  /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1008 22:55:50.808966  185103 cache.go:162] opening:  /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1008 22:55:50.812349  185103 cache.go:162] opening:  /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1008 22:55:50.814313  185103 cache.go:162] opening:  /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1008 22:55:50.853687  185103 cache.go:157] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1008 22:55:50.853712  185103 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 418.155456ms
	I1008 22:55:50.853723  185103 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1008 22:55:51.210838  185103 cache.go:157] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1008 22:55:51.210864  185103 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 775.411839ms
	I1008 22:55:51.210875  185103 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1008 22:55:51.294978  185103 oci.go:107] Successfully prepared a docker volume no-preload-939665
	I1008 22:55:51.295010  185103 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1008 22:55:51.295143  185103 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:55:51.295264  185103 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:55:51.361201  185103 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-939665 --name no-preload-939665 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-939665 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-939665 --network no-preload-939665 --ip 192.168.85.2 --volume no-preload-939665:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:55:51.742235  185103 cache.go:157] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1008 22:55:51.742261  185103 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.307628045s
	I1008 22:55:51.742274  185103 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1008 22:55:51.758991  185103 cache.go:157] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1008 22:55:51.759022  185103 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.322575995s
	I1008 22:55:51.759035  185103 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1008 22:55:51.762883  185103 cache.go:157] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1008 22:55:51.762910  185103 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.327887777s
	I1008 22:55:51.762922  185103 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1008 22:55:51.817826  185103 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Running}}
	I1008 22:55:51.841044  185103 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:55:51.863584  185103 cli_runner.go:164] Run: docker exec no-preload-939665 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:55:51.898009  185103 cache.go:157] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1008 22:55:51.898085  185103 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.463920461s
	I1008 22:55:51.898113  185103 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1008 22:55:51.943491  185103 oci.go:144] the created container "no-preload-939665" has a running status.
	I1008 22:55:51.943558  185103 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa...
	I1008 22:55:52.550825  185103 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:55:52.601885  185103 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:55:52.657762  185103 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:55:52.657783  185103 kic_runner.go:114] Args: [docker exec --privileged no-preload-939665 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:55:52.746185  185103 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:55:52.769567  185103 machine.go:93] provisionDockerMachine start ...
	I1008 22:55:52.769707  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:55:52.811492  185103 main.go:141] libmachine: Using SSH client type: native
	I1008 22:55:52.811824  185103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1008 22:55:52.811835  185103 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:55:52.980668  185103 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-939665
	
	I1008 22:55:52.980711  185103 ubuntu.go:182] provisioning hostname "no-preload-939665"
	I1008 22:55:52.980777  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:55:52.999940  185103 main.go:141] libmachine: Using SSH client type: native
	I1008 22:55:53.000250  185103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1008 22:55:53.000266  185103 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-939665 && echo "no-preload-939665" | sudo tee /etc/hostname
	I1008 22:55:53.061368  185103 cache.go:157] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1008 22:55:53.061396  185103 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.625271215s
	I1008 22:55:53.061408  185103 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1008 22:55:53.061419  185103 cache.go:87] Successfully saved all images to host disk.
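
Because this profile runs with --preload=false, each control-plane image is fetched individually and written as a per-image tarball under .minikube/cache/images/arm64, and the cache.go lines above check for an existing tarball before downloading (the storage-provisioner entry returns in microseconds for exactly that reason). A small sketch of that exists-then-skip pattern; the path layout mirrors the log, while the helper names and the placeholder pull step are assumptions, not minikube's API.

// check-then-cache: skip the download when the per-image tarball already exists.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cachePath maps "registry.k8s.io/pause:3.10.1" to
// <cacheDir>/registry.k8s.io/pause_3.10.1, the layout visible in the log.
func cachePath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func ensureCached(cacheDir, image string) error {
	start := time.Now()
	dst := cachePath(cacheDir, image)
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, dst, time.Since(start))
		return nil
	}
	// Placeholder: the real code pulls the image and saves it to dst as a tarball.
	return fmt.Errorf("pull not implemented in this sketch: %s", image)
}

func main() {
	cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/arm64")
	for _, img := range []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/kube-apiserver:v1.34.1",
	} {
		if err := ensureCached(cacheDir, img); err != nil {
			fmt.Println(err)
		}
	}
}
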
	I1008 22:55:53.189248  185103 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-939665
	
	I1008 22:55:53.189409  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:55:53.212957  185103 main.go:141] libmachine: Using SSH client type: native
	I1008 22:55:53.213268  185103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1008 22:55:53.213290  185103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-939665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-939665/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-939665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:55:53.362147  185103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
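
The provisioning steps above reach the node over SSH: the kic container publishes port 22 to an ephemeral port on 127.0.0.1 (33061 in this run), which is read back from `docker container inspect` and then dialed with the generated machine key. A sketch of that lookup and a single remote command, assuming the docker and ssh CLIs are available; the `docker` login user and key path follow this log but are otherwise assumptions.

// find the published SSH port for the kic container and run one command on it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const name = "no-preload-939665"
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	fmt.Println("ssh endpoint:", "127.0.0.1:"+port)

	// Equivalent of the libmachine SSH step above: run `hostname` on the node.
	hostOut, err := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/21681-2481/.minikube/machines/"+name+"/id_rsa",
		"-o", "StrictHostKeyChecking=no",
		"-p", port, "docker@127.0.0.1", "hostname").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(hostOut))
}
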
	I1008 22:55:53.362177  185103 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:55:53.362204  185103 ubuntu.go:190] setting up certificates
	I1008 22:55:53.362214  185103 provision.go:84] configureAuth start
	I1008 22:55:53.362277  185103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:55:53.384210  185103 provision.go:143] copyHostCerts
	I1008 22:55:53.384280  185103 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:55:53.384295  185103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:55:53.384359  185103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:55:53.384455  185103 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:55:53.384464  185103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:55:53.384491  185103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:55:53.384553  185103 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:55:53.384562  185103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:55:53.384588  185103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:55:53.384646  185103 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.no-preload-939665 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-939665]
	I1008 22:55:53.761874  185103 provision.go:177] copyRemoteCerts
	I1008 22:55:53.761944  185103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:55:53.761996  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:55:53.779510  185103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:55:53.885422  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:55:53.903865  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 22:55:53.921349  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 22:55:53.939756  185103 provision.go:87] duration metric: took 577.509396ms to configureAuth
	I1008 22:55:53.939829  185103 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:55:53.940037  185103 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:55:53.940149  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:55:53.957227  185103 main.go:141] libmachine: Using SSH client type: native
	I1008 22:55:53.957551  185103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33061 <nil> <nil>}
	I1008 22:55:53.957573  185103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:55:54.256909  185103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:55:54.256933  185103 machine.go:96] duration metric: took 1.487346195s to provisionDockerMachine
	I1008 22:55:54.256943  185103 client.go:171] duration metric: took 3.795032121s to LocalClient.Create
	I1008 22:55:54.256957  185103 start.go:167] duration metric: took 3.795137993s to libmachine.API.Create "no-preload-939665"
	I1008 22:55:54.256965  185103 start.go:293] postStartSetup for "no-preload-939665" (driver="docker")
	I1008 22:55:54.256975  185103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:55:54.257091  185103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:55:54.257140  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:55:54.275654  185103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:55:54.377584  185103 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:55:54.380863  185103 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:55:54.380899  185103 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:55:54.380911  185103 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:55:54.380968  185103 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:55:54.381055  185103 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:55:54.381173  185103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:55:54.388689  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:55:54.407196  185103 start.go:296] duration metric: took 150.217809ms for postStartSetup
	I1008 22:55:54.407591  185103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:55:54.424797  185103 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/config.json ...
	I1008 22:55:54.425110  185103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:55:54.425178  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:55:54.443363  185103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:55:54.542563  185103 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:55:54.547191  185103 start.go:128] duration metric: took 4.089276151s to createHost
	I1008 22:55:54.547218  185103 start.go:83] releasing machines lock for "no-preload-939665", held for 4.089459816s
	I1008 22:55:54.547295  185103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:55:54.564318  185103 ssh_runner.go:195] Run: cat /version.json
	I1008 22:55:54.564382  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:55:54.564693  185103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:55:54.564755  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:55:54.589868  185103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:55:54.601113  185103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:55:54.698377  185103 ssh_runner.go:195] Run: systemctl --version
	I1008 22:55:54.805795  185103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:55:54.848602  185103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:55:54.853275  185103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:55:54.853346  185103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:55:54.883548  185103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:55:54.883572  185103 start.go:495] detecting cgroup driver to use...
	I1008 22:55:54.883604  185103 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:55:54.883652  185103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:55:54.901501  185103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:55:54.913910  185103 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:55:54.913984  185103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:55:54.931591  185103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:55:54.949895  185103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:55:55.074760  185103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:55:55.200999  185103 docker.go:234] disabling docker service ...
	I1008 22:55:55.201086  185103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:55:55.222591  185103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:55:55.237373  185103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:55:55.347794  185103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:55:55.454798  185103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:55:55.467522  185103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:55:55.485754  185103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:55:55.485817  185103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:55:55.494391  185103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:55:55.494456  185103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:55:55.503385  185103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:55:55.512119  185103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:55:55.520677  185103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:55:55.528830  185103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:55:55.537553  185103 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:55:55.551601  185103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:55:55.560542  185103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:55:55.568159  185103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:55:55.576051  185103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:55:55.683757  185103 ssh_runner.go:195] Run: sudo systemctl restart crio
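
The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then restart cri-o. A sketch of the two central edits done as line rewrites in Go rather than sed, intended for a scratch copy of the drop-in file, not a live node.

// rewrite the cri-o drop-in: pin the pause image and switch to cgroupfs,
// the same effect as the two sed -i commands logged above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
	// minikube then runs: systemctl daemon-reload && systemctl restart crio.
}
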
	I1008 22:55:55.803258  185103 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:55:55.803377  185103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:55:55.807276  185103 start.go:563] Will wait 60s for crictl version
	I1008 22:55:55.807386  185103 ssh_runner.go:195] Run: which crictl
	I1008 22:55:55.811092  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:55:55.834011  185103 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:55:55.834108  185103 ssh_runner.go:195] Run: crio --version
	I1008 22:55:55.863292  185103 ssh_runner.go:195] Run: crio --version
	I1008 22:55:55.896463  185103 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:55:55.899426  185103 cli_runner.go:164] Run: docker network inspect no-preload-939665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:55:55.915712  185103 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:55:55.919891  185103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:55:55.929548  185103 kubeadm.go:883] updating cluster {Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:55:55.929696  185103 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:55:55.929744  185103 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:55:55.953749  185103 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1008 22:55:55.953773  185103 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
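
LoadCachedImages decides per image whether a transfer is needed: it asks the runtime for the image (via podman/crictl), and an image that is absent or has the wrong hash is removed and re-loaded from the cached tarball, which is why the "needs transfer" and `crictl rmi` lines follow below. A minimal sketch of that check, assuming podman and crictl are installed on the node; the helper name is illustrative.

// per-image "needs transfer" check: present in the runtime, or remove and reload.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the runtime's ID for an image, or "" if it is not present.
func imageID(image string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	images := []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/kube-apiserver:v1.34.1",
	}
	for _, img := range images {
		if id := imageID(img); id != "" {
			fmt.Printf("%s already present as %s\n", img, id)
			continue
		}
		fmt.Printf("%q needs transfer: not in container runtime\n", img)
		// Drop any stale reference before loading the cached tarball.
		_ = exec.Command("sudo", "crictl", "rmi", img).Run()
	}
}
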
	I1008 22:55:55.953820  185103 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:55:55.953849  185103 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1008 22:55:55.954277  185103 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1008 22:55:55.954507  185103 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1008 22:55:55.954589  185103 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1008 22:55:55.954929  185103 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1008 22:55:55.954996  185103 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1008 22:55:55.955888  185103 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1008 22:55:55.961858  185103 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1008 22:55:55.961992  185103 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1008 22:55:55.962150  185103 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1008 22:55:55.962278  185103 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1008 22:55:55.962420  185103 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1008 22:55:55.962489  185103 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1008 22:55:55.962561  185103 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1008 22:55:55.961858  185103 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:55:56.184535  185103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1008 22:55:56.205880  185103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1008 22:55:56.207391  185103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1008 22:55:56.209420  185103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1008 22:55:56.209670  185103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1008 22:55:56.226531  185103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1008 22:55:56.249219  185103 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1008 22:55:56.249257  185103 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1008 22:55:56.249302  185103 ssh_runner.go:195] Run: which crictl
	I1008 22:55:56.267085  185103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1008 22:55:56.313979  185103 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1008 22:55:56.314021  185103 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1008 22:55:56.314072  185103 ssh_runner.go:195] Run: which crictl
	I1008 22:55:56.314150  185103 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1008 22:55:56.314169  185103 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1008 22:55:56.314190  185103 ssh_runner.go:195] Run: which crictl
	I1008 22:55:56.349224  185103 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1008 22:55:56.349265  185103 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1008 22:55:56.349312  185103 ssh_runner.go:195] Run: which crictl
	I1008 22:55:56.349351  185103 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1008 22:55:56.349400  185103 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1008 22:55:56.349419  185103 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1008 22:55:56.349444  185103 ssh_runner.go:195] Run: which crictl
	I1008 22:55:56.349423  185103 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1008 22:55:56.349504  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1008 22:55:56.349527  185103 ssh_runner.go:195] Run: which crictl
	I1008 22:55:56.363545  185103 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1008 22:55:56.363634  185103 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1008 22:55:56.363718  185103 ssh_runner.go:195] Run: which crictl
	I1008 22:55:56.363845  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1008 22:55:56.363937  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1008 22:55:56.387056  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1008 22:55:56.387123  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1008 22:55:56.387170  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1008 22:55:56.387229  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1008 22:55:56.417121  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1008 22:55:56.417188  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1008 22:55:56.417244  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1008 22:55:56.511139  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1008 22:55:56.511214  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1008 22:55:56.511275  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1008 22:55:56.511171  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1008 22:55:56.530668  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1008 22:55:56.530748  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1008 22:55:56.530816  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1008 22:55:56.613919  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1008 22:55:56.613998  185103 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1008 22:55:56.614058  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1008 22:55:56.614131  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1008 22:55:56.614189  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1008 22:55:56.640794  185103 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1008 22:55:56.640962  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1008 22:55:56.641071  185103 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1008 22:55:56.641150  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1008 22:55:56.641273  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1008 22:55:56.689911  185103 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1008 22:55:56.689927  185103 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1008 22:55:56.690017  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1008 22:55:56.690051  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1008 22:55:56.690110  185103 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1008 22:55:56.690122  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1008 22:55:56.690189  185103 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1008 22:55:56.690224  185103 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1008 22:55:56.690258  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1008 22:55:56.690288  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1008 22:55:56.690321  185103 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1008 22:55:56.690335  185103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1008 22:55:56.690336  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1008 22:55:56.690347  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1008 22:55:56.735433  185103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1008 22:55:56.735617  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1008 22:55:56.735514  185103 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1008 22:55:56.735541  185103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1008 22:55:56.735755  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1008 22:55:56.735582  185103 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1008 22:55:56.735834  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1008 22:55:56.735887  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1008 22:55:56.763142  185103 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1008 22:55:56.763226  185103 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1008 22:55:57.106077  185103 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1008 22:55:57.239831  185103 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1008 22:55:57.239899  185103 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1008 22:55:57.338799  185103 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1008 22:55:57.339088  185103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:55:59.011772  185103 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.771851209s)
	I1008 22:55:59.011804  185103 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1008 22:55:59.011824  185103 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1008 22:55:59.011874  185103 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1008 22:55:59.011952  185103 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.672782821s)
	I1008 22:55:59.011980  185103 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1008 22:55:59.012010  185103 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:55:59.012041  185103 ssh_runner.go:195] Run: which crictl
	I1008 22:56:00.943401  185103 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.931501252s)
	I1008 22:56:00.943426  185103 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1008 22:56:00.943444  185103 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1008 22:56:00.943488  185103 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1008 22:56:00.943552  185103 ssh_runner.go:235] Completed: which crictl: (1.931499833s)
	I1008 22:56:00.943580  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:56:02.171716  185103 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.228206339s)
	I1008 22:56:02.171743  185103 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1008 22:56:02.171761  185103 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1008 22:56:02.171813  185103 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1008 22:56:02.171882  185103 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.228291666s)
	I1008 22:56:02.171917  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:56:03.488390  185103 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.316553512s)
	I1008 22:56:03.488416  185103 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1008 22:56:03.488415  185103 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.316482208s)
	I1008 22:56:03.488433  185103 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1008 22:56:03.488482  185103 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1008 22:56:03.488482  185103 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:56:04.879784  185103 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.391279134s)
	I1008 22:56:04.879812  185103 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1008 22:56:04.879830  185103 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1008 22:56:04.879875  185103 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1008 22:56:04.879943  185103 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.3914052s)
	I1008 22:56:04.879971  185103 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 22:56:04.880033  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 22:56:08.411168  185103 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.531269353s)
	I1008 22:56:08.411193  185103 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1008 22:56:08.411211  185103 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.531162004s)
	I1008 22:56:08.411234  185103 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1008 22:56:08.411259  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1008 22:56:08.491592  185103 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 22:56:08.491660  185103 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1008 22:56:09.069545  185103 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 22:56:09.069588  185103 cache_images.go:124] Successfully loaded all cached images
	I1008 22:56:09.069595  185103 cache_images.go:93] duration metric: took 13.115808653s to LoadCachedImages
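
For reference, the inspect → transfer → podman load sequence logged above can be reproduced by hand. A minimal sketch follows; it runs podman locally instead of through minikube's SSH runner, and the tarball path is illustrative, not minikube's actual implementation.

// imagesketch.go: check whether an image exists in the podman store and,
// if not, load it from a cached tarball (mirrors the inspect/scp/load
// sequence in the log; the image name and tarball path are illustrative).
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	image := "registry.k8s.io/pause:3.10.1"
	tarball := "/var/lib/minikube/images/pause_3.10.1" // hypothetical local path

	// "podman image inspect" exits non-zero when the image is absent.
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		fmt.Println("image already present:", image)
		return
	}

	// Load the cached image tarball into the container store.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		log.Fatalf("podman load failed: %v\n%s", err, out)
	}
	fmt.Printf("loaded %s from %s\n", image, tarball)
}
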
	I1008 22:56:09.069620  185103 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1008 22:56:09.069778  185103 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-939665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:56:09.069877  185103 ssh_runner.go:195] Run: crio config
	I1008 22:56:09.148278  185103 cni.go:84] Creating CNI manager for ""
	I1008 22:56:09.148303  185103 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:56:09.148321  185103 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:56:09.148349  185103 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-939665 NodeName:no-preload-939665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:56:09.148471  185103 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-939665"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:56:09.148546  185103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:56:09.158056  185103 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1008 22:56:09.158132  185103 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1008 22:56:09.167157  185103 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1008 22:56:09.167259  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1008 22:56:09.167314  185103 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1008 22:56:09.167372  185103 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1008 22:56:09.171927  185103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1008 22:56:09.172006  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1008 22:56:10.266328  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1008 22:56:10.270797  185103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1008 22:56:10.270901  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1008 22:56:10.370637  185103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:56:10.394667  185103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1008 22:56:10.404580  185103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1008 22:56:10.404626  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
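
The kubeadm/kubelet/kubectl binaries are fetched from dl.k8s.io together with a companion .sha256 checksum file, as the download.go lines above show. Below is a minimal sketch of that download-and-verify step; the URL is the one from the log, the output filename is illustrative, and error handling is abbreviated.

// downloadsketch.go: download a Kubernetes release binary and verify it
// against the published .sha256 file before installing it.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) []byte {
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	return data
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm"
	bin := fetch(base)
	// The .sha256 file carries the hex digest (possibly followed by a filename).
	want := strings.Fields(string(fetch(base + ".sha256")))[0]

	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	if err := os.WriteFile("kubeadm", bin, 0o755); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubeadm downloaded and verified")
}
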
	I1008 22:56:10.875236  185103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:56:10.882989  185103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 22:56:10.897113  185103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:56:10.910479  185103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1008 22:56:10.923684  185103 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:56:10.928610  185103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:56:10.938822  185103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:56:11.062578  185103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:56:11.079679  185103 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665 for IP: 192.168.85.2
	I1008 22:56:11.079702  185103 certs.go:195] generating shared ca certs ...
	I1008 22:56:11.079718  185103 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:56:11.079932  185103 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:56:11.080006  185103 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:56:11.080018  185103 certs.go:257] generating profile certs ...
	I1008 22:56:11.080090  185103 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.key
	I1008 22:56:11.080108  185103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt with IP's: []
	I1008 22:56:11.179414  185103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt ...
	I1008 22:56:11.179447  185103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: {Name:mk92025663614c85bf0e78d86336187fceed93b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:56:11.179646  185103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.key ...
	I1008 22:56:11.179660  185103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.key: {Name:mk0d1070feab1b85cbfc228552591e46a664918b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:56:11.179753  185103 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key.108ea954
	I1008 22:56:11.179772  185103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.crt.108ea954 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1008 22:56:11.799068  185103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.crt.108ea954 ...
	I1008 22:56:11.799100  185103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.crt.108ea954: {Name:mk7052a60ac9c7cb622337f566b9b76891a67377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:56:11.799291  185103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key.108ea954 ...
	I1008 22:56:11.799305  185103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key.108ea954: {Name:mk082d7a96294b6a4d037841a8d50510c51c2c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:56:11.799390  185103 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.crt.108ea954 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.crt
	I1008 22:56:11.799469  185103 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key.108ea954 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key
	I1008 22:56:11.799536  185103 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key
	I1008 22:56:11.799557  185103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.crt with IP's: []
	I1008 22:56:12.094849  185103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.crt ...
	I1008 22:56:12.094880  185103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.crt: {Name:mkd8083018ff0e2d3719f25b47396e54f4ad2754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:56:12.095071  185103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key ...
	I1008 22:56:12.095087  185103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key: {Name:mkeff6d6fa52fdd016724b6dcebfc03c9ff1a560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
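
The profile certificates logged above (client, apiserver, proxy-client) are generated locally and signed by the shared minikube CA. The sketch below shows the general shape of issuing one such CA-signed client certificate with crypto/x509; for brevity it creates a throwaway CA in-process instead of loading the .minikube ca.crt/ca.key, so names and lifetimes are illustrative rather than minikube's exact values.

// certsketch.go: issue a CA-signed client certificate, roughly the shape
// of the "generating signed profile cert" steps in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stand-in for the cached minikubeCA cert and key).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Client certificate, analogous to the "minikube-user" profile cert.
	cliKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	cliDER, err := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER})
}
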
	I1008 22:56:12.095284  185103 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:56:12.095333  185103 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:56:12.095349  185103 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:56:12.095377  185103 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:56:12.095410  185103 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:56:12.095438  185103 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:56:12.095484  185103 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:56:12.096468  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:56:12.123813  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:56:12.144550  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:56:12.163869  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:56:12.182334  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 22:56:12.200923  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:56:12.220083  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:56:12.238559  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 22:56:12.256942  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:56:12.279319  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:56:12.299520  185103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:56:12.317309  185103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:56:12.329877  185103 ssh_runner.go:195] Run: openssl version
	I1008 22:56:12.339575  185103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:56:12.348769  185103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:56:12.352650  185103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:56:12.352750  185103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:56:12.395494  185103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:56:12.404206  185103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:56:12.412839  185103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:56:12.416774  185103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:56:12.416894  185103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:56:12.457932  185103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:56:12.466281  185103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:56:12.474576  185103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:56:12.478489  185103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:56:12.478560  185103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:56:12.519419  185103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
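
The ln -fs steps above create the <subject-hash>.0 symlinks that OpenSSL uses to look up trusted certificates in /etc/ssl/certs. A minimal sketch of that hash-and-link pattern follows; it shells out to the same openssl invocation shown in the log, and the certificate path is taken from the log for illustration.

// certlinksketch.go: compute a certificate's OpenSSL subject hash and link
// it into /etc/ssl/certs/<hash>.0, mirroring the openssl/ln steps above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatalf("openssl: %v", err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: replace any existing link, then point it at the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", cert, "->", link)
}
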
	I1008 22:56:12.527690  185103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:56:12.531178  185103 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 22:56:12.531266  185103 kubeadm.go:400] StartCluster: {Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:56:12.531356  185103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:56:12.531417  185103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:56:12.562074  185103 cri.go:89] found id: ""
	I1008 22:56:12.562151  185103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:56:12.569841  185103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:56:12.577567  185103 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:56:12.577734  185103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:56:12.585371  185103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:56:12.585439  185103 kubeadm.go:157] found existing configuration files:
	
	I1008 22:56:12.585505  185103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:56:12.592928  185103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:56:12.593045  185103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:56:12.600223  185103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:56:12.609278  185103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:56:12.609413  185103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:56:12.617781  185103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:56:12.627289  185103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:56:12.627393  185103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:56:12.635389  185103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:56:12.645205  185103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:56:12.645320  185103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:56:12.653700  185103 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:56:12.700921  185103 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:56:12.702081  185103 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:56:12.734252  185103 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:56:12.734547  185103 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:56:12.734626  185103 kubeadm.go:318] OS: Linux
	I1008 22:56:12.734697  185103 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:56:12.734773  185103 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:56:12.734843  185103 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:56:12.734921  185103 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:56:12.734995  185103 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:56:12.735073  185103 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:56:12.735143  185103 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:56:12.735218  185103 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:56:12.735293  185103 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:56:12.806737  185103 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:56:12.806884  185103 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:56:12.806986  185103 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:56:12.823784  185103 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:56:12.832045  185103 out.go:252]   - Generating certificates and keys ...
	I1008 22:56:12.832177  185103 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:56:12.832275  185103 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:56:13.304836  185103 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 22:56:13.907635  185103 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 22:56:14.767829  185103 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 22:56:14.937739  185103 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 22:56:15.127271  185103 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 22:56:15.127616  185103 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-939665] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:56:15.557652  185103 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 22:56:15.558078  185103 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-939665] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:56:15.990335  185103 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 22:56:16.355465  185103 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 22:56:16.605938  185103 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 22:56:16.606251  185103 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:56:17.276624  185103 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:56:17.763558  185103 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:56:18.513004  185103 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:56:19.050337  185103 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:56:19.232665  185103 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:56:19.233259  185103 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:56:19.236120  185103 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:56:19.242079  185103 out.go:252]   - Booting up control plane ...
	I1008 22:56:19.242196  185103 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:56:19.242278  185103 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:56:19.242543  185103 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:56:19.262513  185103 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:56:19.262639  185103 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:56:19.271181  185103 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:56:19.271609  185103 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:56:19.271689  185103 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:56:19.410847  185103 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:56:19.410973  185103 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:56:21.914008  185103 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501592163s
	I1008 22:56:21.916249  185103 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:56:21.916584  185103 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1008 22:56:21.916902  185103 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:56:21.917661  185103 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:56:25.995848  185103 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.077770308s
	I1008 22:56:27.328794  185103 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.410516435s
	I1008 22:56:28.918819  185103 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001500545s
	I1008 22:56:28.937814  185103 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 22:56:28.959525  185103 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 22:56:28.974570  185103 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 22:56:28.975071  185103 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-939665 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 22:56:28.988904  185103 kubeadm.go:318] [bootstrap-token] Using token: 9dv9qh.z2fpvr35p4l5zk02
	I1008 22:56:28.991849  185103 out.go:252]   - Configuring RBAC rules ...
	I1008 22:56:28.991983  185103 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 22:56:28.996463  185103 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 22:56:29.008618  185103 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 22:56:29.013201  185103 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 22:56:29.017793  185103 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 22:56:29.024492  185103 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 22:56:29.325856  185103 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 22:56:29.788623  185103 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 22:56:30.328988  185103 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 22:56:30.329006  185103 kubeadm.go:318] 
	I1008 22:56:30.329070  185103 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 22:56:30.329075  185103 kubeadm.go:318] 
	I1008 22:56:30.329164  185103 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 22:56:30.329170  185103 kubeadm.go:318] 
	I1008 22:56:30.329196  185103 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 22:56:30.329258  185103 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 22:56:30.329311  185103 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 22:56:30.329315  185103 kubeadm.go:318] 
	I1008 22:56:30.329372  185103 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 22:56:30.329376  185103 kubeadm.go:318] 
	I1008 22:56:30.329426  185103 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 22:56:30.329430  185103 kubeadm.go:318] 
	I1008 22:56:30.329484  185103 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 22:56:30.329563  185103 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 22:56:30.329667  185103 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 22:56:30.329677  185103 kubeadm.go:318] 
	I1008 22:56:30.329774  185103 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 22:56:30.329854  185103 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 22:56:30.329859  185103 kubeadm.go:318] 
	I1008 22:56:30.329945  185103 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 9dv9qh.z2fpvr35p4l5zk02 \
	I1008 22:56:30.330052  185103 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 22:56:30.330073  185103 kubeadm.go:318] 	--control-plane 
	I1008 22:56:30.330078  185103 kubeadm.go:318] 
	I1008 22:56:30.330166  185103 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 22:56:30.330170  185103 kubeadm.go:318] 
	I1008 22:56:30.330255  185103 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 9dv9qh.z2fpvr35p4l5zk02 \
	I1008 22:56:30.330360  185103 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 22:56:30.333440  185103 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:56:30.333705  185103 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:56:30.333815  185103 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:56:30.333831  185103 cni.go:84] Creating CNI manager for ""
	I1008 22:56:30.333839  185103 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:56:30.336759  185103 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 22:56:30.339690  185103 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 22:56:30.343851  185103 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 22:56:30.343872  185103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 22:56:30.357747  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 22:56:30.668064  185103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 22:56:30.668197  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:30.668271  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-939665 minikube.k8s.io/updated_at=2025_10_08T22_56_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=no-preload-939665 minikube.k8s.io/primary=true
	I1008 22:56:30.854747  185103 ops.go:34] apiserver oom_adj: -16
	I1008 22:56:30.854906  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:31.355979  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:31.855088  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:32.355640  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:32.855014  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:33.355295  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:33.855567  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:34.355530  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:34.855897  185103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:56:34.947598  185103 kubeadm.go:1113] duration metric: took 4.279445966s to wait for elevateKubeSystemPrivileges
	I1008 22:56:34.947629  185103 kubeadm.go:402] duration metric: took 22.416397616s to StartCluster
	I1008 22:56:34.947646  185103 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:56:34.947736  185103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:56:34.948416  185103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:56:34.948619  185103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 22:56:34.948633  185103 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:56:34.948879  185103 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:56:34.948927  185103 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:56:34.949001  185103 addons.go:69] Setting storage-provisioner=true in profile "no-preload-939665"
	I1008 22:56:34.949015  185103 addons.go:238] Setting addon storage-provisioner=true in "no-preload-939665"
	I1008 22:56:34.949043  185103 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:56:34.949518  185103 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:56:34.949961  185103 addons.go:69] Setting default-storageclass=true in profile "no-preload-939665"
	I1008 22:56:34.949986  185103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-939665"
	I1008 22:56:34.950276  185103 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:56:34.952623  185103 out.go:179] * Verifying Kubernetes components...
	I1008 22:56:34.956397  185103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:56:34.991030  185103 addons.go:238] Setting addon default-storageclass=true in "no-preload-939665"
	I1008 22:56:34.991068  185103 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:56:34.991473  185103 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:56:34.993721  185103 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:56:34.999037  185103 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:56:34.999065  185103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:56:34.999131  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:56:35.044576  185103 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:56:35.044599  185103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:56:35.044660  185103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:56:35.062247  185103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:56:35.079266  185103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33061 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:56:35.331426  185103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:56:35.361591  185103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 22:56:35.361744  185103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:56:35.365519  185103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:56:36.336214  185103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.004753378s)
	I1008 22:56:36.336993  185103 node_ready.go:35] waiting up to 6m0s for node "no-preload-939665" to be "Ready" ...
	I1008 22:56:36.337286  185103 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1008 22:56:36.398409  185103 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 22:56:36.401401  185103 addons.go:514] duration metric: took 1.452451461s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 22:56:36.842341  185103 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-939665" context rescaled to 1 replicas
	W1008 22:56:38.340347  185103 node_ready.go:57] node "no-preload-939665" has "Ready":"False" status (will retry)
	W1008 22:56:40.840374  185103 node_ready.go:57] node "no-preload-939665" has "Ready":"False" status (will retry)
	W1008 22:56:43.340536  185103 node_ready.go:57] node "no-preload-939665" has "Ready":"False" status (will retry)
	W1008 22:56:45.342042  185103 node_ready.go:57] node "no-preload-939665" has "Ready":"False" status (will retry)
	W1008 22:56:47.840878  185103 node_ready.go:57] node "no-preload-939665" has "Ready":"False" status (will retry)
	I1008 22:56:48.849738  185103 node_ready.go:49] node "no-preload-939665" is "Ready"
	I1008 22:56:48.849770  185103 node_ready.go:38] duration metric: took 12.512750562s for node "no-preload-939665" to be "Ready" ...
	I1008 22:56:48.849787  185103 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:56:48.849846  185103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:56:48.880619  185103 api_server.go:72] duration metric: took 13.931957573s to wait for apiserver process to appear ...
	I1008 22:56:48.880643  185103 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:56:48.880662  185103 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:56:48.890053  185103 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 22:56:48.891212  185103 api_server.go:141] control plane version: v1.34.1
	I1008 22:56:48.891240  185103 api_server.go:131] duration metric: took 10.589555ms to wait for apiserver health ...
	I1008 22:56:48.891249  185103 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:56:48.894254  185103 system_pods.go:59] 8 kube-system pods found
	I1008 22:56:48.894301  185103 system_pods.go:61] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:56:48.894309  185103 system_pods.go:61] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running
	I1008 22:56:48.894315  185103 system_pods.go:61] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:56:48.894319  185103 system_pods.go:61] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running
	I1008 22:56:48.894325  185103 system_pods.go:61] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running
	I1008 22:56:48.894331  185103 system_pods.go:61] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:56:48.894343  185103 system_pods.go:61] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running
	I1008 22:56:48.894350  185103 system_pods.go:61] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:56:48.894356  185103 system_pods.go:74] duration metric: took 3.100598ms to wait for pod list to return data ...
	I1008 22:56:48.894364  185103 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:56:48.896883  185103 default_sa.go:45] found service account: "default"
	I1008 22:56:48.896904  185103 default_sa.go:55] duration metric: took 2.533962ms for default service account to be created ...
	I1008 22:56:48.896913  185103 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:56:48.900027  185103 system_pods.go:86] 8 kube-system pods found
	I1008 22:56:48.900061  185103 system_pods.go:89] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:56:48.900068  185103 system_pods.go:89] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running
	I1008 22:56:48.900074  185103 system_pods.go:89] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:56:48.900079  185103 system_pods.go:89] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running
	I1008 22:56:48.900084  185103 system_pods.go:89] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running
	I1008 22:56:48.900088  185103 system_pods.go:89] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:56:48.900093  185103 system_pods.go:89] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running
	I1008 22:56:48.900099  185103 system_pods.go:89] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:56:48.900117  185103 retry.go:31] will retry after 300.991627ms: missing components: kube-dns
	I1008 22:56:49.205963  185103 system_pods.go:86] 8 kube-system pods found
	I1008 22:56:49.205996  185103 system_pods.go:89] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:56:49.206002  185103 system_pods.go:89] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running
	I1008 22:56:49.206008  185103 system_pods.go:89] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:56:49.206012  185103 system_pods.go:89] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running
	I1008 22:56:49.206017  185103 system_pods.go:89] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running
	I1008 22:56:49.206020  185103 system_pods.go:89] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:56:49.206024  185103 system_pods.go:89] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running
	I1008 22:56:49.206029  185103 system_pods.go:89] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:56:49.206042  185103 retry.go:31] will retry after 367.792336ms: missing components: kube-dns
	I1008 22:56:49.577819  185103 system_pods.go:86] 8 kube-system pods found
	I1008 22:56:49.577854  185103 system_pods.go:89] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:56:49.577861  185103 system_pods.go:89] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running
	I1008 22:56:49.577868  185103 system_pods.go:89] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:56:49.577872  185103 system_pods.go:89] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running
	I1008 22:56:49.577877  185103 system_pods.go:89] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running
	I1008 22:56:49.577882  185103 system_pods.go:89] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:56:49.577886  185103 system_pods.go:89] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running
	I1008 22:56:49.577891  185103 system_pods.go:89] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:56:49.577905  185103 retry.go:31] will retry after 432.694986ms: missing components: kube-dns
	I1008 22:56:50.015719  185103 system_pods.go:86] 8 kube-system pods found
	I1008 22:56:50.015758  185103 system_pods.go:89] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Running
	I1008 22:56:50.015766  185103 system_pods.go:89] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running
	I1008 22:56:50.015770  185103 system_pods.go:89] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:56:50.015775  185103 system_pods.go:89] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running
	I1008 22:56:50.015780  185103 system_pods.go:89] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running
	I1008 22:56:50.015783  185103 system_pods.go:89] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:56:50.015788  185103 system_pods.go:89] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running
	I1008 22:56:50.015792  185103 system_pods.go:89] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Running
	I1008 22:56:50.015800  185103 system_pods.go:126] duration metric: took 1.118881672s to wait for k8s-apps to be running ...
	I1008 22:56:50.015810  185103 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:56:50.015873  185103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:56:50.036607  185103 system_svc.go:56] duration metric: took 20.788255ms WaitForService to wait for kubelet
	I1008 22:56:50.036633  185103 kubeadm.go:586] duration metric: took 15.087975982s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:56:50.036651  185103 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:56:50.039735  185103 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:56:50.039768  185103 node_conditions.go:123] node cpu capacity is 2
	I1008 22:56:50.039782  185103 node_conditions.go:105] duration metric: took 3.125657ms to run NodePressure ...
	I1008 22:56:50.039794  185103 start.go:241] waiting for startup goroutines ...
	I1008 22:56:50.039802  185103 start.go:246] waiting for cluster config update ...
	I1008 22:56:50.039840  185103 start.go:255] writing updated cluster config ...
	I1008 22:56:50.040165  185103 ssh_runner.go:195] Run: rm -f paused
	I1008 22:56:50.043866  185103 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:56:50.048482  185103 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wj8wf" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:50.053520  185103 pod_ready.go:94] pod "coredns-66bc5c9577-wj8wf" is "Ready"
	I1008 22:56:50.053544  185103 pod_ready.go:86] duration metric: took 5.033388ms for pod "coredns-66bc5c9577-wj8wf" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:50.055931  185103 pod_ready.go:83] waiting for pod "etcd-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:50.060945  185103 pod_ready.go:94] pod "etcd-no-preload-939665" is "Ready"
	I1008 22:56:50.060972  185103 pod_ready.go:86] duration metric: took 5.012777ms for pod "etcd-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:50.063581  185103 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:50.068352  185103 pod_ready.go:94] pod "kube-apiserver-no-preload-939665" is "Ready"
	I1008 22:56:50.068422  185103 pod_ready.go:86] duration metric: took 4.810395ms for pod "kube-apiserver-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:50.070985  185103 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:50.448184  185103 pod_ready.go:94] pod "kube-controller-manager-no-preload-939665" is "Ready"
	I1008 22:56:50.448213  185103 pod_ready.go:86] duration metric: took 377.197741ms for pod "kube-controller-manager-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:50.648689  185103 pod_ready.go:83] waiting for pod "kube-proxy-77lvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:51.060293  185103 pod_ready.go:94] pod "kube-proxy-77lvp" is "Ready"
	I1008 22:56:51.060331  185103 pod_ready.go:86] duration metric: took 411.614914ms for pod "kube-proxy-77lvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:51.248327  185103 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:51.648116  185103 pod_ready.go:94] pod "kube-scheduler-no-preload-939665" is "Ready"
	I1008 22:56:51.648192  185103 pod_ready.go:86] duration metric: took 399.836381ms for pod "kube-scheduler-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:56:51.648213  185103 pod_ready.go:40] duration metric: took 1.604308738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:56:51.704017  185103 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:56:51.707147  185103 out.go:179] * Done! kubectl is now configured to use "no-preload-939665" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 22:56:49 no-preload-939665 crio[840]: time="2025-10-08T22:56:49.222678867Z" level=info msg="Created container 50fb1f0c1ae59d0295def993c8d2f7e4e7c2ad70bab9f4f05ee882b464a4f3e2: kube-system/coredns-66bc5c9577-wj8wf/coredns" id=1fd7e21d-3052-403f-9fd7-6fa292b070c2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:56:49 no-preload-939665 crio[840]: time="2025-10-08T22:56:49.225930368Z" level=info msg="Starting container: 50fb1f0c1ae59d0295def993c8d2f7e4e7c2ad70bab9f4f05ee882b464a4f3e2" id=c3c4fd93-518c-49ee-b293-a2407c98dc20 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:56:49 no-preload-939665 crio[840]: time="2025-10-08T22:56:49.236905708Z" level=info msg="Started container" PID=2456 containerID=50fb1f0c1ae59d0295def993c8d2f7e4e7c2ad70bab9f4f05ee882b464a4f3e2 description=kube-system/coredns-66bc5c9577-wj8wf/coredns id=c3c4fd93-518c-49ee-b293-a2407c98dc20 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1f81262b87059d91443633ab9531f08e46d5a8ffe0149d2aecac1bbae8f8f75
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.224226676Z" level=info msg="Running pod sandbox: default/busybox/POD" id=412cb725-2c51-4441-9658-5254aa0e7ce5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.224303411Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.230016627Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ad2fd18c95e9eeea711fec4635fbbb2793abe3535d3dab2fe500385a5f0529f7 UID:64834f84-6d88-49e8-81ae-196f4a2bd678 NetNS:/var/run/netns/592104f4-dda8-4398-846a-632fe8d549b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079d98}] Aliases:map[]}"
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.230200489Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.243913495Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ad2fd18c95e9eeea711fec4635fbbb2793abe3535d3dab2fe500385a5f0529f7 UID:64834f84-6d88-49e8-81ae-196f4a2bd678 NetNS:/var/run/netns/592104f4-dda8-4398-846a-632fe8d549b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079d98}] Aliases:map[]}"
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.244253076Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.254795788Z" level=info msg="Ran pod sandbox ad2fd18c95e9eeea711fec4635fbbb2793abe3535d3dab2fe500385a5f0529f7 with infra container: default/busybox/POD" id=412cb725-2c51-4441-9658-5254aa0e7ce5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.256002883Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=43a0e0af-752c-486d-a986-ae1a056d8f68 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.256151676Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=43a0e0af-752c-486d-a986-ae1a056d8f68 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.256201138Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=43a0e0af-752c-486d-a986-ae1a056d8f68 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.257092709Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6d18d16b-a0bb-46d6-90b6-f2f717897142 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:56:52 no-preload-939665 crio[840]: time="2025-10-08T22:56:52.258982873Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.432838641Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6d18d16b-a0bb-46d6-90b6-f2f717897142 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.437366735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=db97adfd-72fe-4773-88b4-e1264ae92e8b name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.440668Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5d7c1dea-2a68-4257-ada1-ba3269e1125f name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.447247482Z" level=info msg="Creating container: default/busybox/busybox" id=7eba1af1-ad13-482e-976b-d69432e7183c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.448023828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.452763403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.45338298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.468056652Z" level=info msg="Created container b9f7c68242d0dfa60a420a45d7d09a93bccf8e55bbb15b033b05d260212dc385: default/busybox/busybox" id=7eba1af1-ad13-482e-976b-d69432e7183c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.469132406Z" level=info msg="Starting container: b9f7c68242d0dfa60a420a45d7d09a93bccf8e55bbb15b033b05d260212dc385" id=a1c9cda9-b9a6-4115-9a92-31f783105473 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:56:54 no-preload-939665 crio[840]: time="2025-10-08T22:56:54.471659755Z" level=info msg="Started container" PID=2517 containerID=b9f7c68242d0dfa60a420a45d7d09a93bccf8e55bbb15b033b05d260212dc385 description=default/busybox/busybox id=a1c9cda9-b9a6-4115-9a92-31f783105473 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad2fd18c95e9eeea711fec4635fbbb2793abe3535d3dab2fe500385a5f0529f7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b9f7c68242d0d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   ad2fd18c95e9e       busybox                                     default
	50fb1f0c1ae59       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   b1f81262b8705       coredns-66bc5c9577-wj8wf                    kube-system
	5362c451024ff       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   c22ddbb9bb25c       storage-provisioner                         kube-system
	c00223a604e70       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   9ff6816a7a783       kindnet-dhln4                               kube-system
	4edceb993423b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      26 seconds ago      Running             kube-proxy                0                   1fc44aa881f53       kube-proxy-77lvp                            kube-system
	0c8b5276ff7b6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      39 seconds ago      Running             kube-controller-manager   0                   5ad63c5ab0dd9       kube-controller-manager-no-preload-939665   kube-system
	cf70796e3cb89       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      39 seconds ago      Running             kube-apiserver            0                   571aa0a2654de       kube-apiserver-no-preload-939665            kube-system
	090c68ac3c82b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      39 seconds ago      Running             etcd                      0                   b1ff03e98f9df       etcd-no-preload-939665                      kube-system
	56fe6474b246b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      39 seconds ago      Running             kube-scheduler            0                   8191e26cb4494       kube-scheduler-no-preload-939665            kube-system
	
	
	==> coredns [50fb1f0c1ae59d0295def993c8d2f7e4e7c2ad70bab9f4f05ee882b464a4f3e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58809 - 51231 "HINFO IN 7594481054887745209.4494873752695370940. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029437529s
	
	
	==> describe nodes <==
	Name:               no-preload-939665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-939665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=no-preload-939665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_56_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:56:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-939665
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:57:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:57:00 +0000   Wed, 08 Oct 2025 22:56:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:57:00 +0000   Wed, 08 Oct 2025 22:56:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:57:00 +0000   Wed, 08 Oct 2025 22:56:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:57:00 +0000   Wed, 08 Oct 2025 22:56:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-939665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 809271b5f2b042e8b597949b968712fa
	  System UUID:                bdda0eaf-05ab-4058-9e68-44ec4f323643
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-wj8wf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-no-preload-939665                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-dhln4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-939665             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-939665    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-77lvp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-939665             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Warning  CgroupV1                 40s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node no-preload-939665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node no-preload-939665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node no-preload-939665 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node no-preload-939665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node no-preload-939665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node no-preload-939665 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27s                node-controller  Node no-preload-939665 event: Registered Node no-preload-939665 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-939665 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 22:27] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [090c68ac3c82b73c3fd46e696f25f6293229622d8964a4e90fd788641224699f] <==
	{"level":"warn","ts":"2025-10-08T22:56:26.054119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.069704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.081407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.095306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.108826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.123518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.137834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.158229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.172356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.207430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.218299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.234523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.249776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.263870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.280895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.294457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.309154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.326080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.341366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.358804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.375973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.408467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.423445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.438769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:56:26.503607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33814","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:57:01 up  1:39,  0 user,  load average: 1.33, 1.36, 1.64
	Linux no-preload-939665 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c00223a604e70a7f2e7691904b115c8ee692956d00a03f1aa040d695a263a474] <==
	I1008 22:56:38.410662       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:56:38.410998       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 22:56:38.411134       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:56:38.411152       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:56:38.411165       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:56:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:56:38.702705       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:56:38.702737       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:56:38.702747       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:56:38.702846       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1008 22:56:39.003172       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:56:39.003324       1 metrics.go:72] Registering metrics
	I1008 22:56:39.003414       1 controller.go:711] "Syncing nftables rules"
	I1008 22:56:48.709545       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:56:48.709606       1 main.go:301] handling current node
	I1008 22:56:58.704129       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:56:58.704173       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cf70796e3cb899519fbeb5e4e73a5204f34dcfb6f6bfc2d5f816cb5e353871c3] <==
	I1008 22:56:27.338026       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 22:56:27.338387       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 22:56:27.339263       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 22:56:27.339467       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1008 22:56:27.361810       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 22:56:27.365290       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:56:27.365296       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:56:28.038278       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1008 22:56:28.046891       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1008 22:56:28.046921       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:56:28.745217       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 22:56:28.840460       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 22:56:28.946869       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 22:56:28.960573       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1008 22:56:28.961745       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 22:56:28.971657       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:56:29.184496       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 22:56:29.759649       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 22:56:29.787083       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 22:56:29.800473       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1008 22:56:34.943547       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 22:56:35.104641       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:56:35.125698       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:56:35.191026       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1008 22:57:00.121197       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:54522: use of closed network connection
	
	
	==> kube-controller-manager [0c8b5276ff7b658979383700e084bd490db9c150588068e53571ce8bda399b8e] <==
	I1008 22:56:34.190332       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1008 22:56:34.190216       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 22:56:34.199471       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-939665" podCIDRs=["10.244.0.0/24"]
	I1008 22:56:34.209822       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 22:56:34.225162       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1008 22:56:34.231558       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 22:56:34.231606       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 22:56:34.231705       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 22:56:34.231788       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-939665"
	I1008 22:56:34.231835       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 22:56:34.232149       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 22:56:34.236732       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1008 22:56:34.236790       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 22:56:34.237027       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 22:56:34.237096       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 22:56:34.237143       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 22:56:34.237189       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 22:56:34.237216       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 22:56:34.237249       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1008 22:56:34.236773       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1008 22:56:34.240116       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:56:34.240659       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1008 22:56:34.240675       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1008 22:56:34.242306       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:56:49.239458       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4edceb993423b8ff5ae533e676ede04ea395e45c10290a029f974f8c82a18e0d] <==
	I1008 22:56:35.827076       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:56:35.953588       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:56:36.056400       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:56:36.056448       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1008 22:56:36.056532       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:56:36.098817       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:56:36.098867       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:56:36.108932       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:56:36.109286       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:56:36.109299       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:56:36.110842       1 config.go:200] "Starting service config controller"
	I1008 22:56:36.110853       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:56:36.110869       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:56:36.110873       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:56:36.110883       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:56:36.110887       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:56:36.111527       1 config.go:309] "Starting node config controller"
	I1008 22:56:36.111535       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:56:36.111546       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 22:56:36.211304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 22:56:36.211337       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 22:56:36.211377       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [56fe6474b246b1747845383da89890f05ca6eb81fde409b85a51ad84c38ededc] <==
	E1008 22:56:27.329527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 22:56:27.329559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 22:56:27.329900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 22:56:27.330000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 22:56:27.330037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1008 22:56:27.330070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 22:56:27.330100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 22:56:27.340591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 22:56:27.340739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 22:56:27.340920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 22:56:27.341044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 22:56:27.341157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 22:56:27.341241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 22:56:27.340516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 22:56:28.137685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 22:56:28.155578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 22:56:28.175405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 22:56:28.184698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 22:56:28.214876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 22:56:28.215890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 22:56:28.267004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 22:56:28.294775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 22:56:28.360695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 22:56:28.785732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1008 22:56:30.816491       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.269291    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2-xtables-lock\") pod \"kube-proxy-77lvp\" (UID: \"7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2\") " pod="kube-system/kube-proxy-77lvp"
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.269350    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2-lib-modules\") pod \"kube-proxy-77lvp\" (UID: \"7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2\") " pod="kube-system/kube-proxy-77lvp"
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.269376    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2-kube-proxy\") pod \"kube-proxy-77lvp\" (UID: \"7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2\") " pod="kube-system/kube-proxy-77lvp"
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.269395    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltdbc\" (UniqueName: \"kubernetes.io/projected/7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2-kube-api-access-ltdbc\") pod \"kube-proxy-77lvp\" (UID: \"7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2\") " pod="kube-system/kube-proxy-77lvp"
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.370270    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/41ab815b-433a-4ad3-b87b-a95a7085d8a1-cni-cfg\") pod \"kindnet-dhln4\" (UID: \"41ab815b-433a-4ad3-b87b-a95a7085d8a1\") " pod="kube-system/kindnet-dhln4"
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.370318    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41ab815b-433a-4ad3-b87b-a95a7085d8a1-xtables-lock\") pod \"kindnet-dhln4\" (UID: \"41ab815b-433a-4ad3-b87b-a95a7085d8a1\") " pod="kube-system/kindnet-dhln4"
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.370339    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41ab815b-433a-4ad3-b87b-a95a7085d8a1-lib-modules\") pod \"kindnet-dhln4\" (UID: \"41ab815b-433a-4ad3-b87b-a95a7085d8a1\") " pod="kube-system/kindnet-dhln4"
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.370381    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp2pp\" (UniqueName: \"kubernetes.io/projected/41ab815b-433a-4ad3-b87b-a95a7085d8a1-kube-api-access-bp2pp\") pod \"kindnet-dhln4\" (UID: \"41ab815b-433a-4ad3-b87b-a95a7085d8a1\") " pod="kube-system/kindnet-dhln4"
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.458554    1981 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: W1008 22:56:35.575288    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/crio-1fc44aa881f53c000bb359d108c7763e23f6705af3dc120f2a42f4f3d7e81154 WatchSource:0}: Error finding container 1fc44aa881f53c000bb359d108c7763e23f6705af3dc120f2a42f4f3d7e81154: Status 404 returned error can't find the container with id 1fc44aa881f53c000bb359d108c7763e23f6705af3dc120f2a42f4f3d7e81154
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: W1008 22:56:35.628613    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/crio-9ff6816a7a783573fd363ed7c6d9287aabf1c6fe407f90a269255a3761a26289 WatchSource:0}: Error finding container 9ff6816a7a783573fd363ed7c6d9287aabf1c6fe407f90a269255a3761a26289: Status 404 returned error can't find the container with id 9ff6816a7a783573fd363ed7c6d9287aabf1c6fe407f90a269255a3761a26289
	Oct 08 22:56:35 no-preload-939665 kubelet[1981]: I1008 22:56:35.879671    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-77lvp" podStartSLOduration=0.879654574 podStartE2EDuration="879.654574ms" podCreationTimestamp="2025-10-08 22:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:56:35.849291503 +0000 UTC m=+6.254072850" watchObservedRunningTime="2025-10-08 22:56:35.879654574 +0000 UTC m=+6.284435879"
	Oct 08 22:56:39 no-preload-939665 kubelet[1981]: I1008 22:56:39.058724    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dhln4" podStartSLOduration=1.37902274 podStartE2EDuration="4.058705524s" podCreationTimestamp="2025-10-08 22:56:35 +0000 UTC" firstStartedPulling="2025-10-08 22:56:35.63815569 +0000 UTC m=+6.042937004" lastFinishedPulling="2025-10-08 22:56:38.317838482 +0000 UTC m=+8.722619788" observedRunningTime="2025-10-08 22:56:38.843242267 +0000 UTC m=+9.248023589" watchObservedRunningTime="2025-10-08 22:56:39.058705524 +0000 UTC m=+9.463486830"
	Oct 08 22:56:48 no-preload-939665 kubelet[1981]: I1008 22:56:48.799428    1981 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 08 22:56:48 no-preload-939665 kubelet[1981]: I1008 22:56:48.876144    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhzj7\" (UniqueName: \"kubernetes.io/projected/a4b8c0c9-d983-4a71-b7d3-6fd64717accb-kube-api-access-fhzj7\") pod \"coredns-66bc5c9577-wj8wf\" (UID: \"a4b8c0c9-d983-4a71-b7d3-6fd64717accb\") " pod="kube-system/coredns-66bc5c9577-wj8wf"
	Oct 08 22:56:48 no-preload-939665 kubelet[1981]: I1008 22:56:48.876348    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtpx\" (UniqueName: \"kubernetes.io/projected/c9b0b18d-b8ca-4994-99c4-d6485cc58032-kube-api-access-xvtpx\") pod \"storage-provisioner\" (UID: \"c9b0b18d-b8ca-4994-99c4-d6485cc58032\") " pod="kube-system/storage-provisioner"
	Oct 08 22:56:48 no-preload-939665 kubelet[1981]: I1008 22:56:48.876443    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4b8c0c9-d983-4a71-b7d3-6fd64717accb-config-volume\") pod \"coredns-66bc5c9577-wj8wf\" (UID: \"a4b8c0c9-d983-4a71-b7d3-6fd64717accb\") " pod="kube-system/coredns-66bc5c9577-wj8wf"
	Oct 08 22:56:48 no-preload-939665 kubelet[1981]: I1008 22:56:48.876535    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c9b0b18d-b8ca-4994-99c4-d6485cc58032-tmp\") pod \"storage-provisioner\" (UID: \"c9b0b18d-b8ca-4994-99c4-d6485cc58032\") " pod="kube-system/storage-provisioner"
	Oct 08 22:56:49 no-preload-939665 kubelet[1981]: W1008 22:56:49.161011    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/crio-c22ddbb9bb25cbf63d1297be673f66f6f92f234157108064d74e7b93d70444ef WatchSource:0}: Error finding container c22ddbb9bb25cbf63d1297be673f66f6f92f234157108064d74e7b93d70444ef: Status 404 returned error can't find the container with id c22ddbb9bb25cbf63d1297be673f66f6f92f234157108064d74e7b93d70444ef
	Oct 08 22:56:49 no-preload-939665 kubelet[1981]: W1008 22:56:49.172574    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/crio-b1f81262b87059d91443633ab9531f08e46d5a8ffe0149d2aecac1bbae8f8f75 WatchSource:0}: Error finding container b1f81262b87059d91443633ab9531f08e46d5a8ffe0149d2aecac1bbae8f8f75: Status 404 returned error can't find the container with id b1f81262b87059d91443633ab9531f08e46d5a8ffe0149d2aecac1bbae8f8f75
	Oct 08 22:56:49 no-preload-939665 kubelet[1981]: I1008 22:56:49.884262    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.884246056 podStartE2EDuration="13.884246056s" podCreationTimestamp="2025-10-08 22:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:56:49.868622891 +0000 UTC m=+20.273404214" watchObservedRunningTime="2025-10-08 22:56:49.884246056 +0000 UTC m=+20.289027362"
	Oct 08 22:56:51 no-preload-939665 kubelet[1981]: I1008 22:56:51.914320    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wj8wf" podStartSLOduration=16.914299473 podStartE2EDuration="16.914299473s" podCreationTimestamp="2025-10-08 22:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:56:49.885560829 +0000 UTC m=+20.290342134" watchObservedRunningTime="2025-10-08 22:56:51.914299473 +0000 UTC m=+22.319080787"
	Oct 08 22:56:51 no-preload-939665 kubelet[1981]: I1008 22:56:51.999647    1981 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r422l\" (UniqueName: \"kubernetes.io/projected/64834f84-6d88-49e8-81ae-196f4a2bd678-kube-api-access-r422l\") pod \"busybox\" (UID: \"64834f84-6d88-49e8-81ae-196f4a2bd678\") " pod="default/busybox"
	Oct 08 22:56:52 no-preload-939665 kubelet[1981]: W1008 22:56:52.250981    1981 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/crio-ad2fd18c95e9eeea711fec4635fbbb2793abe3535d3dab2fe500385a5f0529f7 WatchSource:0}: Error finding container ad2fd18c95e9eeea711fec4635fbbb2793abe3535d3dab2fe500385a5f0529f7: Status 404 returned error can't find the container with id ad2fd18c95e9eeea711fec4635fbbb2793abe3535d3dab2fe500385a5f0529f7
	Oct 08 22:56:54 no-preload-939665 kubelet[1981]: I1008 22:56:54.876491    1981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.6945624160000001 podStartE2EDuration="3.876475197s" podCreationTimestamp="2025-10-08 22:56:51 +0000 UTC" firstStartedPulling="2025-10-08 22:56:52.256382227 +0000 UTC m=+22.661163533" lastFinishedPulling="2025-10-08 22:56:54.438295008 +0000 UTC m=+24.843076314" observedRunningTime="2025-10-08 22:56:54.876098216 +0000 UTC m=+25.280879530" watchObservedRunningTime="2025-10-08 22:56:54.876475197 +0000 UTC m=+25.281256503"
	
	
	==> storage-provisioner [5362c451024ff68b81d6ba653fc3bb8ad6adecb50d446fbfe32b0a3ea62c4a0d] <==
	I1008 22:56:49.235522       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 22:56:49.261415       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 22:56:49.261475       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 22:56:49.264946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:49.274539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:56:49.274693       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 22:56:49.274861       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-939665_2f0c2d23-aaf8-4c41-9323-c67c1cca5e68!
	I1008 22:56:49.278507       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d7db233-93f3-4724-94fd-ba2ce2cb320c", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-939665_2f0c2d23-aaf8-4c41-9323-c67c1cca5e68 became leader
	W1008 22:56:49.298835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:49.310402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:56:49.375348       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-939665_2f0c2d23-aaf8-4c41-9323-c67c1cca5e68!
	W1008 22:56:51.313410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:51.318035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:53.321125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:53.325738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:55.328534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:55.333120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:57.335879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:57.340593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:59.344449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:56:59.349154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:57:01.352651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:57:01.362997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-939665 -n no-preload-939665
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-939665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.57s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-939665 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-939665 --alsologtostderr -v=1: exit status 80 (1.812991255s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-939665 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:58:14.615685  191536 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:58:14.615825  191536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:58:14.615837  191536 out.go:374] Setting ErrFile to fd 2...
	I1008 22:58:14.615843  191536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:58:14.616139  191536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:58:14.616425  191536 out.go:368] Setting JSON to false
	I1008 22:58:14.616462  191536 mustload.go:65] Loading cluster: no-preload-939665
	I1008 22:58:14.616887  191536 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:58:14.617429  191536 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:58:14.635581  191536 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:58:14.635932  191536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:58:14.691582  191536 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-08 22:58:14.68251358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:58:14.692220  191536 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-939665 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1008 22:58:14.695540  191536 out.go:179] * Pausing node no-preload-939665 ... 
	I1008 22:58:14.699325  191536 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:58:14.699669  191536 ssh_runner.go:195] Run: systemctl --version
	I1008 22:58:14.699720  191536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:58:14.716662  191536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:58:14.816377  191536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:58:14.828657  191536 pause.go:52] kubelet running: true
	I1008 22:58:14.828733  191536 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 22:58:15.056969  191536 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 22:58:15.057068  191536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 22:58:15.130817  191536 cri.go:89] found id: "92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20"
	I1008 22:58:15.130883  191536 cri.go:89] found id: "bade74eb19946af21f5ffbfb4ffa4e7f81bb41043453f2dca89df500be9f1376"
	I1008 22:58:15.130904  191536 cri.go:89] found id: "c0d2286c0fb19de49b39e27723286f23f37dd0279a1348cf94a2b65a52a99273"
	I1008 22:58:15.130925  191536 cri.go:89] found id: "c28c75461cf867bdf283e13c269bfe255b9c7fc15ced477eb8b068c032bc4178"
	I1008 22:58:15.130955  191536 cri.go:89] found id: "1099fb7bc0b5a6a715edc1ae2c1822b4f424b055875ea1147123708dbca0e939"
	I1008 22:58:15.130980  191536 cri.go:89] found id: "22fc15165b261a32940f2dedd3cd49b69d20e5e7e6bd128a867f2fd9e14ac7b3"
	I1008 22:58:15.131004  191536 cri.go:89] found id: "f8d8050a525b66b1f6059b9bef9774b0a018d7f0b512729419df31644ff85c2d"
	I1008 22:58:15.131031  191536 cri.go:89] found id: "e70ea0acf987029e54c7b861915d0152d9b02ade1e0875e36f54a30ca0b4114e"
	I1008 22:58:15.131050  191536 cri.go:89] found id: "fab90393033f57458857473a4b92f90f061b427583bfdde329136620a71abcee"
	I1008 22:58:15.131079  191536 cri.go:89] found id: "8a83632a73b7920e80de176c3a5ba53ba3266776a89382be87f4612c3f712fe1"
	I1008 22:58:15.131101  191536 cri.go:89] found id: "156ae21a191583af601f44668a0ae6339b9eb2752a19bb2691e28827eb9f58b2"
	I1008 22:58:15.131125  191536 cri.go:89] found id: ""
	I1008 22:58:15.131201  191536 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 22:58:15.151047  191536 retry.go:31] will retry after 232.794937ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:58:15Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:58:15.384583  191536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:58:15.398658  191536 pause.go:52] kubelet running: false
	I1008 22:58:15.398753  191536 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 22:58:15.569289  191536 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 22:58:15.569420  191536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 22:58:15.648990  191536 cri.go:89] found id: "92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20"
	I1008 22:58:15.649023  191536 cri.go:89] found id: "bade74eb19946af21f5ffbfb4ffa4e7f81bb41043453f2dca89df500be9f1376"
	I1008 22:58:15.649029  191536 cri.go:89] found id: "c0d2286c0fb19de49b39e27723286f23f37dd0279a1348cf94a2b65a52a99273"
	I1008 22:58:15.649033  191536 cri.go:89] found id: "c28c75461cf867bdf283e13c269bfe255b9c7fc15ced477eb8b068c032bc4178"
	I1008 22:58:15.649037  191536 cri.go:89] found id: "1099fb7bc0b5a6a715edc1ae2c1822b4f424b055875ea1147123708dbca0e939"
	I1008 22:58:15.649040  191536 cri.go:89] found id: "22fc15165b261a32940f2dedd3cd49b69d20e5e7e6bd128a867f2fd9e14ac7b3"
	I1008 22:58:15.649066  191536 cri.go:89] found id: "f8d8050a525b66b1f6059b9bef9774b0a018d7f0b512729419df31644ff85c2d"
	I1008 22:58:15.649070  191536 cri.go:89] found id: "e70ea0acf987029e54c7b861915d0152d9b02ade1e0875e36f54a30ca0b4114e"
	I1008 22:58:15.649074  191536 cri.go:89] found id: "fab90393033f57458857473a4b92f90f061b427583bfdde329136620a71abcee"
	I1008 22:58:15.649094  191536 cri.go:89] found id: "8a83632a73b7920e80de176c3a5ba53ba3266776a89382be87f4612c3f712fe1"
	I1008 22:58:15.649104  191536 cri.go:89] found id: "156ae21a191583af601f44668a0ae6339b9eb2752a19bb2691e28827eb9f58b2"
	I1008 22:58:15.649108  191536 cri.go:89] found id: ""
	I1008 22:58:15.649180  191536 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 22:58:15.662127  191536 retry.go:31] will retry after 391.284948ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:58:15Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:58:16.053743  191536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:58:16.067936  191536 pause.go:52] kubelet running: false
	I1008 22:58:16.068046  191536 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 22:58:16.246634  191536 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 22:58:16.246781  191536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 22:58:16.332804  191536 cri.go:89] found id: "92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20"
	I1008 22:58:16.332878  191536 cri.go:89] found id: "bade74eb19946af21f5ffbfb4ffa4e7f81bb41043453f2dca89df500be9f1376"
	I1008 22:58:16.332900  191536 cri.go:89] found id: "c0d2286c0fb19de49b39e27723286f23f37dd0279a1348cf94a2b65a52a99273"
	I1008 22:58:16.332918  191536 cri.go:89] found id: "c28c75461cf867bdf283e13c269bfe255b9c7fc15ced477eb8b068c032bc4178"
	I1008 22:58:16.332939  191536 cri.go:89] found id: "1099fb7bc0b5a6a715edc1ae2c1822b4f424b055875ea1147123708dbca0e939"
	I1008 22:58:16.332970  191536 cri.go:89] found id: "22fc15165b261a32940f2dedd3cd49b69d20e5e7e6bd128a867f2fd9e14ac7b3"
	I1008 22:58:16.332992  191536 cri.go:89] found id: "f8d8050a525b66b1f6059b9bef9774b0a018d7f0b512729419df31644ff85c2d"
	I1008 22:58:16.333016  191536 cri.go:89] found id: "e70ea0acf987029e54c7b861915d0152d9b02ade1e0875e36f54a30ca0b4114e"
	I1008 22:58:16.333041  191536 cri.go:89] found id: "fab90393033f57458857473a4b92f90f061b427583bfdde329136620a71abcee"
	I1008 22:58:16.333064  191536 cri.go:89] found id: "8a83632a73b7920e80de176c3a5ba53ba3266776a89382be87f4612c3f712fe1"
	I1008 22:58:16.333086  191536 cri.go:89] found id: "156ae21a191583af601f44668a0ae6339b9eb2752a19bb2691e28827eb9f58b2"
	I1008 22:58:16.333110  191536 cri.go:89] found id: ""
	I1008 22:58:16.333187  191536 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 22:58:16.352843  191536 out.go:203] 
	W1008 22:58:16.355882  191536 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:58:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:58:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 22:58:16.356087  191536 out.go:285] * 
	* 
	W1008 22:58:16.361773  191536 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 22:58:16.366736  191536 out.go:203] 

                                                
                                                
** /stderr **
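Note on the failure above: every `sudo runc list -f json` attempt in the stderr block fails with `open /run/runc: no such file or directory`, so the pause flow retries the listing and then exits with the GUEST_PAUSE error (exit status 80). What follows is a minimal, stand-alone Go sketch of that retry-then-fail pattern only; the `listRuncContainers` helper, the fixed retry delays, and the exit-code handling are assumptions for illustration, not minikube's actual pause implementation.

// runc_list_retry.go: illustrative sketch of the retry-then-fail behaviour
// visible in the log above; not minikube source.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// listRuncContainers shells out the same command the ssh_runner lines show,
// `sudo runc list -f json`, returning combined output and any error.
func listRuncContainers() ([]byte, error) {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("list running: runc: %v: %w\noutput:\n%s", cmd, err, out)
	}
	return out, nil
}

func main() {
	// Assumed delays; the real log shows backoffs of ~233ms and ~391ms.
	delays := []time.Duration{250 * time.Millisecond, 400 * time.Millisecond}
	var lastErr error
	for attempt := 0; ; attempt++ {
		out, err := listRuncContainers()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		lastErr = err
		if attempt >= len(delays) {
			break // third consecutive failure: give up, as in the log above
		}
		fmt.Printf("will retry after %v: %v\n", delays[attempt], err)
		time.Sleep(delays[attempt])
	}
	// Corresponds to "X Exiting due to GUEST_PAUSE" / exit status 80 in the log.
	fmt.Fprintf(os.Stderr, "X Exiting due to GUEST_PAUSE: Pause: %v\n", lastErr)
	os.Exit(80)
}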
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-939665 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-939665
helpers_test.go:243: (dbg) docker inspect no-preload-939665:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4",
	        "Created": "2025-10-08T22:55:51.376878504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 189343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:57:15.052934733Z",
	            "FinishedAt": "2025-10-08T22:57:14.257504308Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/hostname",
	        "HostsPath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/hosts",
	        "LogPath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4-json.log",
	        "Name": "/no-preload-939665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-939665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-939665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4",
	                "LowerDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-939665",
	                "Source": "/var/lib/docker/volumes/no-preload-939665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-939665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-939665",
	                "name.minikube.sigs.k8s.io": "no-preload-939665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "100e573ca2d387da7af8696d7655863318af52f4290b1916df5dad80e070430d",
	            "SandboxKey": "/var/run/docker/netns/100e573ca2d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-939665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:52:42:85:45:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc660108ce7e425dc8ccc8b9b4c79d2e7285488dbd4605c4f5b483d992fc9478",
	                    "EndpointID": "e38b20ba2334e3c440dd7b4ea47346b3a72a6ccc7a6529503100457228ab0831",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-939665",
	                        "28f143a4ef4a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
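The `docker inspect` output above is the same data the pause command read at 22:58:14.699 to resolve the container's host-mapped SSH port (22/tcp -> 127.0.0.1:33066) via the template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`. Below is a small stand-alone Go sketch of that lookup, shelling out to `docker container inspect -f` the same way; the helper name `hostSSHPort` and the error handling are assumptions for illustration only, not minikube's actual code.

// inspect_port.go: illustrative sketch, not minikube source.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort runs `docker container inspect -f` with the same template that
// appears in the log, then strips the surrounding single quotes from the result.
func hostSSHPort(container string) (string, error) {
	format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
}

func main() {
	port, err := hostSSHPort("no-preload-939665")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port:", port) // for the inspect output above: 33066
}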
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-939665 -n no-preload-939665
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-939665 -n no-preload-939665: exit status 2 (447.943083ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-939665 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-939665 logs -n 25: (1.599243648s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:45 UTC │ 08 Oct 25 22:46 UTC │
	│ start   │ -p cert-expiration-292528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ delete  │ -p cert-expiration-292528                                                                                                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ start   │ -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │                     │
	│ delete  │ -p force-systemd-env-092546                                                                                                                                                                                                                   │ force-systemd-env-092546  │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:52 UTC │
	│ start   │ -p cert-options-378019 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ cert-options-378019 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ -p cert-options-378019 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ delete  │ -p cert-options-378019                                                                                                                                                                                                                        │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │                     │
	│ stop    │ -p old-k8s-version-110407 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-110407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:55 UTC │
	│ image   │ old-k8s-version-110407 image list --format=json                                                                                                                                                                                               │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ pause   │ -p old-k8s-version-110407 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │                     │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	│ stop    │ -p no-preload-939665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:58 UTC │
	│ image   │ no-preload-939665 image list --format=json                                                                                                                                                                                                    │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ pause   │ -p no-preload-939665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:57:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:57:14.782613  189215 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:57:14.782899  189215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:57:14.782916  189215 out.go:374] Setting ErrFile to fd 2...
	I1008 22:57:14.782922  189215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:57:14.783293  189215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:57:14.783741  189215 out.go:368] Setting JSON to false
	I1008 22:57:14.784656  189215 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5985,"bootTime":1759958250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:57:14.784745  189215 start.go:141] virtualization:  
	I1008 22:57:14.787916  189215 out.go:179] * [no-preload-939665] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:57:14.791714  189215 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:57:14.791882  189215 notify.go:220] Checking for updates...
	I1008 22:57:14.797701  189215 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:57:14.800574  189215 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:57:14.803453  189215 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:57:14.806361  189215 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:57:14.809186  189215 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:57:14.812556  189215 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:57:14.813125  189215 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:57:14.841927  189215 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:57:14.842105  189215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:57:14.898169  189215 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:57:14.888828193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:57:14.898273  189215 docker.go:318] overlay module found
	I1008 22:57:14.901448  189215 out.go:179] * Using the docker driver based on existing profile
	I1008 22:57:14.904243  189215 start.go:305] selected driver: docker
	I1008 22:57:14.904260  189215 start.go:925] validating driver "docker" against &{Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:57:14.904383  189215 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:57:14.905115  189215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:57:14.957085  189215 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:57:14.948430473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:57:14.957449  189215 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:57:14.957477  189215 cni.go:84] Creating CNI manager for ""
	I1008 22:57:14.957535  189215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:57:14.957581  189215 start.go:349] cluster config:
	{Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:57:14.960867  189215 out.go:179] * Starting "no-preload-939665" primary control-plane node in "no-preload-939665" cluster
	I1008 22:57:14.963850  189215 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:57:14.966939  189215 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:57:14.969809  189215 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:57:14.969897  189215 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:57:14.969958  189215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/config.json ...
	I1008 22:57:14.970334  189215 cache.go:107] acquiring lock: {Name:mk344f5adac59ef32f6d69c009b0f8ec87052611 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970423  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1008 22:57:14.970437  189215 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 123.07µs
	I1008 22:57:14.970460  189215 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1008 22:57:14.970475  189215 cache.go:107] acquiring lock: {Name:mk2a1f78f7d6511aea6d634a58ed1c88718aab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970511  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1008 22:57:14.970520  189215 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 46.335µs
	I1008 22:57:14.970527  189215 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1008 22:57:14.970542  189215 cache.go:107] acquiring lock: {Name:mk7141aa7b89df55e8dad25221487d909ba46017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970574  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1008 22:57:14.970582  189215 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 40.935µs
	I1008 22:57:14.970589  189215 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1008 22:57:14.970598  189215 cache.go:107] acquiring lock: {Name:mk49b6b290192d16491277897c30c50e3badc30b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970628  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1008 22:57:14.970638  189215 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.501µs
	I1008 22:57:14.970644  189215 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1008 22:57:14.970653  189215 cache.go:107] acquiring lock: {Name:mka3f9c49147e0e292b0cfd3d6255817b177ac9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970685  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1008 22:57:14.970695  189215 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 42.691µs
	I1008 22:57:14.970701  189215 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1008 22:57:14.970713  189215 cache.go:107] acquiring lock: {Name:mk85b30d8a79adbfa59b06c1c836919be1606fc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970744  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1008 22:57:14.970753  189215 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 43.012µs
	I1008 22:57:14.970759  189215 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1008 22:57:14.970774  189215 cache.go:107] acquiring lock: {Name:mka1ae807285591bb895528e804cb6d37d5af28f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970800  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1008 22:57:14.970809  189215 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.046µs
	I1008 22:57:14.970815  189215 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1008 22:57:14.970825  189215 cache.go:107] acquiring lock: {Name:mk61bfc3bad4ca73036eaa8d93cb87fd5c241083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970863  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1008 22:57:14.970873  189215 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 50.766µs
	I1008 22:57:14.970880  189215 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1008 22:57:14.970886  189215 cache.go:87] Successfully saved all images to host disk.
	I1008 22:57:14.990397  189215 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:57:14.990422  189215 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:57:14.990442  189215 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:57:14.990471  189215 start.go:360] acquireMachinesLock for no-preload-939665: {Name:mk60e1980ef0e273f848717956362180f47a8fab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.990555  189215 start.go:364] duration metric: took 63.353µs to acquireMachinesLock for "no-preload-939665"
	I1008 22:57:14.990584  189215 start.go:96] Skipping create...Using existing machine configuration
	I1008 22:57:14.990607  189215 fix.go:54] fixHost starting: 
	I1008 22:57:14.990890  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:15.009848  189215 fix.go:112] recreateIfNeeded on no-preload-939665: state=Stopped err=<nil>
	W1008 22:57:15.009885  189215 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 22:57:15.013952  189215 out.go:252] * Restarting existing docker container for "no-preload-939665" ...
	I1008 22:57:15.014066  189215 cli_runner.go:164] Run: docker start no-preload-939665
	I1008 22:57:15.284522  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:15.311136  189215 kic.go:430] container "no-preload-939665" state is running.
	I1008 22:57:15.311522  189215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:57:15.331603  189215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/config.json ...
	I1008 22:57:15.331823  189215 machine.go:93] provisionDockerMachine start ...
	I1008 22:57:15.331882  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:15.351588  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:15.351896  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:15.351905  189215 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:57:15.352659  189215 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 22:57:18.497516  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-939665
	
	I1008 22:57:18.497540  189215 ubuntu.go:182] provisioning hostname "no-preload-939665"
	I1008 22:57:18.497652  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:18.515142  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:18.515455  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:18.515473  189215 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-939665 && echo "no-preload-939665" | sudo tee /etc/hostname
	I1008 22:57:18.671631  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-939665
	
	I1008 22:57:18.671704  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:18.689144  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:18.689488  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:18.689514  189215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-939665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-939665/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-939665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:57:18.833913  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:57:18.833983  189215 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:57:18.834024  189215 ubuntu.go:190] setting up certificates
	I1008 22:57:18.834042  189215 provision.go:84] configureAuth start
	I1008 22:57:18.834106  189215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:57:18.854660  189215 provision.go:143] copyHostCerts
	I1008 22:57:18.854730  189215 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:57:18.854749  189215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:57:18.854844  189215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:57:18.854950  189215 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:57:18.854967  189215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:57:18.855004  189215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:57:18.855062  189215 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:57:18.855073  189215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:57:18.855099  189215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:57:18.855154  189215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.no-preload-939665 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-939665]
	I1008 22:57:19.066124  189215 provision.go:177] copyRemoteCerts
	I1008 22:57:19.066188  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:57:19.066228  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.084957  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.185272  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 22:57:19.204144  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:57:19.221907  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 22:57:19.239403  189215 provision.go:87] duration metric: took 405.337994ms to configureAuth
	I1008 22:57:19.239432  189215 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:57:19.239668  189215 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:57:19.239788  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.259287  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:19.259598  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:19.259621  189215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:57:19.574850  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:57:19.574879  189215 machine.go:96] duration metric: took 4.243046683s to provisionDockerMachine
	I1008 22:57:19.574890  189215 start.go:293] postStartSetup for "no-preload-939665" (driver="docker")
	I1008 22:57:19.574901  189215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:57:19.574971  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:57:19.575015  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.593115  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.694140  189215 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:57:19.697805  189215 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:57:19.697837  189215 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:57:19.697849  189215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:57:19.697903  189215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:57:19.697993  189215 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:57:19.698106  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:57:19.706223  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:57:19.724098  189215 start.go:296] duration metric: took 149.193283ms for postStartSetup
	I1008 22:57:19.724176  189215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:57:19.724236  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.742535  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.842716  189215 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:57:19.847703  189215 fix.go:56] duration metric: took 4.857097744s for fixHost
	I1008 22:57:19.847773  189215 start.go:83] releasing machines lock for "no-preload-939665", held for 4.857203623s
	I1008 22:57:19.847881  189215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:57:19.865178  189215 ssh_runner.go:195] Run: cat /version.json
	I1008 22:57:19.865223  189215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:57:19.865233  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.865286  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.885811  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.891213  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:20.088495  189215 ssh_runner.go:195] Run: systemctl --version
	I1008 22:57:20.095529  189215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:57:20.132456  189215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:57:20.137397  189215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:57:20.137500  189215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:57:20.146025  189215 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 22:57:20.146049  189215 start.go:495] detecting cgroup driver to use...
	I1008 22:57:20.146113  189215 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:57:20.146179  189215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:57:20.161810  189215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:57:20.175319  189215 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:57:20.175421  189215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:57:20.191090  189215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:57:20.204457  189215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:57:20.315736  189215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:57:20.440129  189215 docker.go:234] disabling docker service ...
	I1008 22:57:20.440216  189215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:57:20.455361  189215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:57:20.469076  189215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:57:20.586412  189215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:57:20.706047  189215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:57:20.718719  189215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:57:20.732049  189215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:57:20.732141  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.740752  189215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:57:20.740813  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.749357  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.758257  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.767201  189215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:57:20.775190  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.783656  189215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.791696  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.800386  189215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:57:20.808060  189215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:57:20.815631  189215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:57:20.925930  189215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 22:57:21.067006  189215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:57:21.067071  189215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:57:21.070804  189215 start.go:563] Will wait 60s for crictl version
	I1008 22:57:21.070866  189215 ssh_runner.go:195] Run: which crictl
	I1008 22:57:21.074187  189215 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:57:21.098882  189215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:57:21.099037  189215 ssh_runner.go:195] Run: crio --version
	I1008 22:57:21.129152  189215 ssh_runner.go:195] Run: crio --version
	I1008 22:57:21.159678  189215 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:57:21.162526  189215 cli_runner.go:164] Run: docker network inspect no-preload-939665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:57:21.182847  189215 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:57:21.186792  189215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:57:21.196569  189215 kubeadm.go:883] updating cluster {Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:57:21.196696  189215 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:57:21.196743  189215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:57:21.234573  189215 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:57:21.234598  189215 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:57:21.234606  189215 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1008 22:57:21.234750  189215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-939665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:57:21.234830  189215 ssh_runner.go:195] Run: crio config
	I1008 22:57:21.292904  189215 cni.go:84] Creating CNI manager for ""
	I1008 22:57:21.292932  189215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:57:21.292950  189215 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:57:21.292972  189215 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-939665 NodeName:no-preload-939665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:57:21.293101  189215 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-939665"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:57:21.293173  189215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:57:21.301074  189215 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:57:21.301163  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:57:21.308677  189215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 22:57:21.321204  189215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:57:21.333547  189215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1008 22:57:21.346162  189215 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:57:21.350364  189215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:57:21.360170  189215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:57:21.467099  189215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:57:21.481987  189215 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665 for IP: 192.168.85.2
	I1008 22:57:21.482060  189215 certs.go:195] generating shared ca certs ...
	I1008 22:57:21.482092  189215 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:21.482258  189215 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:57:21.482339  189215 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:57:21.482373  189215 certs.go:257] generating profile certs ...
	I1008 22:57:21.482513  189215 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.key
	I1008 22:57:21.482622  189215 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key.108ea954
	I1008 22:57:21.482693  189215 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key
	I1008 22:57:21.482836  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:57:21.482893  189215 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:57:21.482922  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:57:21.482982  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:57:21.483035  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:57:21.483093  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:57:21.483163  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:57:21.483813  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:57:21.502778  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:57:21.520733  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:57:21.537842  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:57:21.559178  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 22:57:21.579614  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:57:21.600183  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:57:21.622833  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 22:57:21.643796  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:57:21.664903  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:57:21.687300  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:57:21.707757  189215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:57:21.721225  189215 ssh_runner.go:195] Run: openssl version
	I1008 22:57:21.727932  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:57:21.736231  189215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:57:21.740177  189215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:57:21.740254  189215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:57:21.787097  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:57:21.794806  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:57:21.802792  189215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:57:21.809303  189215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:57:21.809402  189215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:57:21.851934  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:57:21.860228  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:57:21.868365  189215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:57:21.872140  189215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:57:21.872222  189215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:57:21.913723  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:57:21.921382  189215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:57:21.925115  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 22:57:21.966240  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 22:57:22.008960  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 22:57:22.050751  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 22:57:22.105518  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 22:57:22.176820  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 22:57:22.232949  189215 kubeadm.go:400] StartCluster: {Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:57:22.233035  189215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:57:22.233090  189215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:57:22.302288  189215 cri.go:89] found id: "22fc15165b261a32940f2dedd3cd49b69d20e5e7e6bd128a867f2fd9e14ac7b3"
	I1008 22:57:22.302311  189215 cri.go:89] found id: "f8d8050a525b66b1f6059b9bef9774b0a018d7f0b512729419df31644ff85c2d"
	I1008 22:57:22.302317  189215 cri.go:89] found id: "e70ea0acf987029e54c7b861915d0152d9b02ade1e0875e36f54a30ca0b4114e"
	I1008 22:57:22.302331  189215 cri.go:89] found id: "fab90393033f57458857473a4b92f90f061b427583bfdde329136620a71abcee"
	I1008 22:57:22.302335  189215 cri.go:89] found id: ""
	I1008 22:57:22.302398  189215 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 22:57:22.323705  189215 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:57:22Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:57:22.323795  189215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:57:22.336052  189215 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 22:57:22.336070  189215 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 22:57:22.336119  189215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 22:57:22.351401  189215 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 22:57:22.351898  189215 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-939665" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:57:22.352007  189215 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-939665" cluster setting kubeconfig missing "no-preload-939665" context setting]
	I1008 22:57:22.352298  189215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:22.353574  189215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 22:57:22.366548  189215 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 22:57:22.366580  189215 kubeadm.go:601] duration metric: took 30.503126ms to restartPrimaryControlPlane
	I1008 22:57:22.366591  189215 kubeadm.go:402] duration metric: took 133.650455ms to StartCluster
	I1008 22:57:22.366606  189215 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:22.366672  189215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:57:22.367360  189215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:22.367593  189215 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:57:22.367913  189215 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:57:22.367964  189215 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:57:22.368030  189215 addons.go:69] Setting storage-provisioner=true in profile "no-preload-939665"
	I1008 22:57:22.368057  189215 addons.go:238] Setting addon storage-provisioner=true in "no-preload-939665"
	W1008 22:57:22.368070  189215 addons.go:247] addon storage-provisioner should already be in state true
	I1008 22:57:22.368095  189215 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:57:22.368651  189215 addons.go:69] Setting dashboard=true in profile "no-preload-939665"
	I1008 22:57:22.368675  189215 addons.go:238] Setting addon dashboard=true in "no-preload-939665"
	W1008 22:57:22.368687  189215 addons.go:247] addon dashboard should already be in state true
	I1008 22:57:22.368707  189215 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:57:22.369171  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.369483  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.369909  189215 addons.go:69] Setting default-storageclass=true in profile "no-preload-939665"
	I1008 22:57:22.369931  189215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-939665"
	I1008 22:57:22.370206  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.373239  189215 out.go:179] * Verifying Kubernetes components...
	I1008 22:57:22.376522  189215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:57:22.435767  189215 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 22:57:22.435850  189215 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:57:22.437591  189215 addons.go:238] Setting addon default-storageclass=true in "no-preload-939665"
	W1008 22:57:22.437613  189215 addons.go:247] addon default-storageclass should already be in state true
	I1008 22:57:22.437660  189215 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:57:22.438223  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.438823  189215 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:57:22.438847  189215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:57:22.438905  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:22.442056  189215 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 22:57:22.453816  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 22:57:22.453845  189215 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 22:57:22.453928  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:22.489201  189215 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:57:22.489223  189215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:57:22.489292  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:22.493728  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:22.506546  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:22.526057  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:22.731621  189215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:57:22.754398  189215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:57:22.802907  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 22:57:22.802933  189215 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 22:57:22.820730  189215 node_ready.go:35] waiting up to 6m0s for node "no-preload-939665" to be "Ready" ...
	I1008 22:57:22.834337  189215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:57:22.867847  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 22:57:22.867873  189215 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 22:57:22.958881  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 22:57:22.959044  189215 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 22:57:23.009828  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 22:57:23.009895  189215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 22:57:23.051687  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 22:57:23.051760  189215 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 22:57:23.078571  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 22:57:23.078656  189215 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 22:57:23.095149  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 22:57:23.095223  189215 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 22:57:23.110652  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 22:57:23.110724  189215 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 22:57:23.146912  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:57:23.146978  189215 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 22:57:23.172565  189215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:57:26.428523  189215 node_ready.go:49] node "no-preload-939665" is "Ready"
	I1008 22:57:26.428555  189215 node_ready.go:38] duration metric: took 3.607792114s for node "no-preload-939665" to be "Ready" ...
	I1008 22:57:26.428570  189215 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:57:26.428661  189215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:57:26.609530  189215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.855058214s)
	I1008 22:57:27.790649  189215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.956230897s)
	I1008 22:57:27.790861  189215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.618214083s)
	I1008 22:57:27.791096  189215 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.362417244s)
	I1008 22:57:27.791160  189215 api_server.go:72] duration metric: took 5.423535251s to wait for apiserver process to appear ...
	I1008 22:57:27.791181  189215 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:57:27.791226  189215 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:57:27.794370  189215 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-939665 addons enable metrics-server
	
	I1008 22:57:27.797309  189215 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1008 22:57:27.800172  189215 addons.go:514] duration metric: took 5.432173441s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1008 22:57:27.808726  189215 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 22:57:27.808753  189215 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 22:57:28.291323  189215 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:57:28.299370  189215 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 22:57:28.300462  189215 api_server.go:141] control plane version: v1.34.1
	I1008 22:57:28.300485  189215 api_server.go:131] duration metric: took 509.284275ms to wait for apiserver health ...
	I1008 22:57:28.300495  189215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:57:28.303496  189215 system_pods.go:59] 8 kube-system pods found
	I1008 22:57:28.303535  189215 system_pods.go:61] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:57:28.303567  189215 system_pods.go:61] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:57:28.303578  189215 system_pods.go:61] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:57:28.303587  189215 system_pods.go:61] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:57:28.303604  189215 system_pods.go:61] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:57:28.303610  189215 system_pods.go:61] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:57:28.303617  189215 system_pods.go:61] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:57:28.303636  189215 system_pods.go:61] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Running
	I1008 22:57:28.303647  189215 system_pods.go:74] duration metric: took 3.14283ms to wait for pod list to return data ...
	I1008 22:57:28.303663  189215 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:57:28.306192  189215 default_sa.go:45] found service account: "default"
	I1008 22:57:28.306220  189215 default_sa.go:55] duration metric: took 2.550603ms for default service account to be created ...
	I1008 22:57:28.306230  189215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:57:28.308825  189215 system_pods.go:86] 8 kube-system pods found
	I1008 22:57:28.308858  189215 system_pods.go:89] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:57:28.308868  189215 system_pods.go:89] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:57:28.308874  189215 system_pods.go:89] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:57:28.308881  189215 system_pods.go:89] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:57:28.308888  189215 system_pods.go:89] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:57:28.308892  189215 system_pods.go:89] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:57:28.308899  189215 system_pods.go:89] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:57:28.308909  189215 system_pods.go:89] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Running
	I1008 22:57:28.308915  189215 system_pods.go:126] duration metric: took 2.680204ms to wait for k8s-apps to be running ...
	I1008 22:57:28.308929  189215 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:57:28.308984  189215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:57:28.322889  189215 system_svc.go:56] duration metric: took 13.951449ms WaitForService to wait for kubelet
	I1008 22:57:28.322918  189215 kubeadm.go:586] duration metric: took 5.955290813s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:57:28.322958  189215 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:57:28.328387  189215 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:57:28.328425  189215 node_conditions.go:123] node cpu capacity is 2
	I1008 22:57:28.328438  189215 node_conditions.go:105] duration metric: took 5.467412ms to run NodePressure ...
	I1008 22:57:28.328451  189215 start.go:241] waiting for startup goroutines ...
	I1008 22:57:28.328458  189215 start.go:246] waiting for cluster config update ...
	I1008 22:57:28.328473  189215 start.go:255] writing updated cluster config ...
	I1008 22:57:28.328760  189215 ssh_runner.go:195] Run: rm -f paused
	I1008 22:57:28.332532  189215 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:57:28.336688  189215 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wj8wf" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 22:57:30.350864  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:32.843064  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:34.844285  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:36.845168  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:39.344059  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:41.842645  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:43.843301  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:46.342737  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:48.842335  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:50.842730  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:52.842806  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:55.342860  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:57.844337  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:58:00.353944  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	I1008 22:58:01.343135  189215 pod_ready.go:94] pod "coredns-66bc5c9577-wj8wf" is "Ready"
	I1008 22:58:01.343163  189215 pod_ready.go:86] duration metric: took 33.006442095s for pod "coredns-66bc5c9577-wj8wf" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.346159  189215 pod_ready.go:83] waiting for pod "etcd-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.350999  189215 pod_ready.go:94] pod "etcd-no-preload-939665" is "Ready"
	I1008 22:58:01.351028  189215 pod_ready.go:86] duration metric: took 4.841796ms for pod "etcd-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.353471  189215 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.358536  189215 pod_ready.go:94] pod "kube-apiserver-no-preload-939665" is "Ready"
	I1008 22:58:01.358567  189215 pod_ready.go:86] duration metric: took 5.065093ms for pod "kube-apiserver-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.361323  189215 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.541059  189215 pod_ready.go:94] pod "kube-controller-manager-no-preload-939665" is "Ready"
	I1008 22:58:01.541090  189215 pod_ready.go:86] duration metric: took 179.740333ms for pod "kube-controller-manager-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.741235  189215 pod_ready.go:83] waiting for pod "kube-proxy-77lvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.141610  189215 pod_ready.go:94] pod "kube-proxy-77lvp" is "Ready"
	I1008 22:58:02.141660  189215 pod_ready.go:86] duration metric: took 400.391388ms for pod "kube-proxy-77lvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.340814  189215 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.741219  189215 pod_ready.go:94] pod "kube-scheduler-no-preload-939665" is "Ready"
	I1008 22:58:02.741265  189215 pod_ready.go:86] duration metric: took 400.423027ms for pod "kube-scheduler-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.741278  189215 pod_ready.go:40] duration metric: took 34.408667436s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:58:02.798065  189215 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:58:02.802090  189215 out.go:179] * Done! kubectl is now configured to use "no-preload-939665" cluster and "default" namespace by default
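The restart path above polls the apiserver's /healthz until it moves from the initial 500 (only [-]poststarthook/rbac/bootstrap-roles failing) to a plain 200 "ok", and only then checks pods. The following is a rough Go sketch of that polling pattern, not minikube's actual implementation; the endpoint comes from the log, and the TLS handling (skipping verification instead of loading the cluster CA) is an assumption made to keep the example short.

// healthz_poll.go: illustrative polling of the apiserver /healthz endpoint,
// mirroring the 500 -> 200 transition seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip verification instead of trusting the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.85.2:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // expect "ok"
				return
			}
			fmt.Printf("healthz not ready yet: %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}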
	
	
	==> CRI-O <==
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.888125718Z" level=info msg="Removing container: d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7" id=5154abab-bafb-4ab3-8863-bce272cb72f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.904959072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.905236137Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a14d0df2d057639936f6018c94d29381c1e6bf90c5dfd94d2a0bfe136c515c75/merged/etc/passwd: no such file or directory"
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.905278903Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a14d0df2d057639936f6018c94d29381c1e6bf90c5dfd94d2a0bfe136c515c75/merged/etc/group: no such file or directory"
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.906539865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.915999086Z" level=info msg="Error loading conmon cgroup of container d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7: cgroup deleted" id=5154abab-bafb-4ab3-8863-bce272cb72f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.927572554Z" level=info msg="Removed container d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs/dashboard-metrics-scraper" id=5154abab-bafb-4ab3-8863-bce272cb72f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.938080747Z" level=info msg="Created container 92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20: kube-system/storage-provisioner/storage-provisioner" id=2f07cf73-baa6-49f7-8a35-4ea34cd4708d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.939146909Z" level=info msg="Starting container: 92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20" id=a15b8902-f347-438c-b025-59703d9df8c1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.942544249Z" level=info msg="Started container" PID=1629 containerID=92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20 description=kube-system/storage-provisioner/storage-provisioner id=a15b8902-f347-438c-b025-59703d9df8c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a2e9aa746b8090e32353a12b0a2f1252ac263167a34631fae4d71ef6f6d254ed
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.506135373Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.512555181Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.512591834Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.512614792Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.515933042Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.515966782Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.515993457Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.519723601Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.51975602Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.519779905Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.522819145Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.522852729Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.522874785Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.525923838Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.525958571Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	92514c9dbe0b3       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago      Running             storage-provisioner         2                   a2e9aa746b809       storage-provisioner                          kube-system
	8a83632a73b79       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   421ea0f5d2c6e       dashboard-metrics-scraper-6ffb444bf9-cz2qs   kubernetes-dashboard
	156ae21a19158       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago      Running             kubernetes-dashboard        0                   b8f42166df0ab       kubernetes-dashboard-855c9754f9-f6ktf        kubernetes-dashboard
	bade74eb19946       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   802c77776e892       coredns-66bc5c9577-wj8wf                     kube-system
	e863c168a0e85       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   f8ec2a06eefb5       busybox                                      default
	c0d2286c0fb19       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   201c7dea97bcf       kube-proxy-77lvp                             kube-system
	c28c75461cf86       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           50 seconds ago      Exited              storage-provisioner         1                   a2e9aa746b809       storage-provisioner                          kube-system
	1099fb7bc0b5a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   bc4286483b119       kindnet-dhln4                                kube-system
	22fc15165b261       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           55 seconds ago      Running             kube-scheduler              1                   2fc82623bdfc2       kube-scheduler-no-preload-939665             kube-system
	f8d8050a525b6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           55 seconds ago      Running             etcd                        1                   c6a3eb9c84141       etcd-no-preload-939665                       kube-system
	e70ea0acf9870       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           55 seconds ago      Running             kube-controller-manager     1                   519f888d62fb3       kube-controller-manager-no-preload-939665    kube-system
	fab90393033f5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           55 seconds ago      Running             kube-apiserver              1                   f18b7611d22c4       kube-apiserver-no-preload-939665             kube-system
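Earlier in the trace (cri.go:54) the same containers were enumerated with crictl, filtered by the kube-system namespace label; the table above is the human-readable view of that state. A minimal sketch of issuing the equivalent crictl query from Go via os/exec follows; it assumes crictl is on PATH and already configured for the CRI-O socket, and that sudo is available, as on these CI nodes.

// crictl_list.go: illustrative wrapper around the crictl invocation recorded
// at cri.go:54 above. Run with sufficient privileges on the node.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatalf("crictl failed: %v", err)
	}
	ids := strings.Fields(strings.TrimSpace(string(out)))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}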
	
	
	==> coredns [bade74eb19946af21f5ffbfb4ffa4e7f81bb41043453f2dca89df500be9f1376] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46218 - 2562 "HINFO IN 198312727653361217.8862779728426046954. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01928423s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
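The i/o timeouts above are CoreDNS failing to reach the in-cluster apiserver Service VIP (10.96.0.1:443, from the serviceSubnet 10.96.0.0/12 shown earlier) while the control plane was still restarting; they stop once networking settles. Below is a minimal reachability probe of that VIP, intended only as a sketch of the kind of check one could run from inside the cluster's network to confirm the symptom.

// vip_probe.go: minimal TCP reachability check against the default kubernetes
// Service VIP that CoreDNS was timing out on. Illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Printf("unreachable: %v\n", err) // matches the "i/o timeout" symptom
		return
	}
	defer conn.Close()
	fmt.Println("10.96.0.1:443 reachable")
}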
	
	
	==> describe nodes <==
	Name:               no-preload-939665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-939665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=no-preload-939665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_56_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:56:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-939665
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:58:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:57:57 +0000   Wed, 08 Oct 2025 22:56:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:57:57 +0000   Wed, 08 Oct 2025 22:56:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:57:57 +0000   Wed, 08 Oct 2025 22:56:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:57:57 +0000   Wed, 08 Oct 2025 22:56:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-939665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 529d733e216047a8a089b00ae851c5b5
	  System UUID:                bdda0eaf-05ab-4058-9e68-44ec4f323643
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-wj8wf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     102s
	  kube-system                 etcd-no-preload-939665                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         107s
	  kube-system                 kindnet-dhln4                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-no-preload-939665              250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-no-preload-939665     200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-77lvp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-no-preload-939665              100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cz2qs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f6ktf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 101s                 kube-proxy       
	  Normal   Starting                 50s                  kube-proxy       
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node no-preload-939665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node no-preload-939665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node no-preload-939665 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  108s                 kubelet          Node no-preload-939665 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 108s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    108s                 kubelet          Node no-preload-939665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     108s                 kubelet          Node no-preload-939665 status is now: NodeHasSufficientPID
	  Normal   Starting                 108s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           103s                 node-controller  Node no-preload-939665 event: Registered Node no-preload-939665 in Controller
	  Normal   NodeReady                89s                  kubelet          Node no-preload-939665 status is now: NodeReady
	  Normal   Starting                 56s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 56s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node no-preload-939665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node no-preload-939665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node no-preload-939665 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                  node-controller  Node no-preload-939665 event: Registered Node no-preload-939665 in Controller
	
	
	==> dmesg <==
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f8d8050a525b66b1f6059b9bef9774b0a018d7f0b512729419df31644ff85c2d] <==
	{"level":"warn","ts":"2025-10-08T22:57:24.747685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.767213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.783148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.806919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.826575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.843854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.856157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.877532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.890641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.947615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.987460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.019194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.057998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.099381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.127379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.158378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.195906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.222963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.278759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.317993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.379981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.432475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.458119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.478375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.558356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53690","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:58:18 up  1:40,  0 user,  load average: 1.51, 1.47, 1.66
	Linux no-preload-939665 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1099fb7bc0b5a6a715edc1ae2c1822b4f424b055875ea1147123708dbca0e939] <==
	I1008 22:57:27.307242       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:57:27.307882       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 22:57:27.308148       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:57:27.308196       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:57:27.308237       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:57:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:57:27.504081       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:57:27.504155       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:57:27.504188       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:57:27.505083       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 22:57:57.505179       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 22:57:57.505191       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 22:57:57.505299       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 22:57:57.505406       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1008 22:57:59.104769       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:57:59.104825       1 metrics.go:72] Registering metrics
	I1008 22:57:59.104887       1 controller.go:711] "Syncing nftables rules"
	I1008 22:58:07.505737       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:58:07.505828       1 main.go:301] handling current node
	I1008 22:58:17.512584       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:58:17.512615       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fab90393033f57458857473a4b92f90f061b427583bfdde329136620a71abcee] <==
	I1008 22:57:26.462144       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 22:57:26.500545       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 22:57:26.500581       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 22:57:26.500698       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1008 22:57:26.500794       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1008 22:57:26.500842       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 22:57:26.511810       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 22:57:26.511984       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1008 22:57:26.512000       1 policy_source.go:240] refreshing policies
	I1008 22:57:26.512173       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:57:26.520740       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 22:57:26.526948       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 22:57:26.527017       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 22:57:26.558361       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:57:26.764335       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 22:57:27.133150       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:57:27.446595       1 controller.go:667] quota admission added evaluator for: namespaces
	I1008 22:57:27.515076       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 22:57:27.560478       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 22:57:27.578397       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 22:57:27.675445       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.246.240"}
	I1008 22:57:27.726414       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.10.127"}
	I1008 22:57:29.735881       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:57:29.976499       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 22:57:30.179378       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e70ea0acf987029e54c7b861915d0152d9b02ade1e0875e36f54a30ca0b4114e] <==
	I1008 22:57:29.734616       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1008 22:57:29.736754       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 22:57:29.739227       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 22:57:29.741656       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 22:57:29.743892       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1008 22:57:29.745066       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:57:29.764901       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:57:29.769915       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 22:57:29.770011       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 22:57:29.770052       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1008 22:57:29.770098       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1008 22:57:29.770279       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 22:57:29.770333       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 22:57:29.770370       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 22:57:29.770547       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-939665"
	I1008 22:57:29.770599       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1008 22:57:29.771023       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 22:57:29.771242       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1008 22:57:29.771300       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 22:57:29.782562       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 22:57:29.787906       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 22:57:29.802356       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:57:29.802383       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:57:29.802391       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:57:29.808956       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c0d2286c0fb19de49b39e27723286f23f37dd0279a1348cf94a2b65a52a99273] <==
	I1008 22:57:27.367399       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:57:27.567344       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:57:27.668299       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:57:27.668341       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1008 22:57:27.669220       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:57:27.772927       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:57:27.772988       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:57:27.798866       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:57:27.799219       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:57:27.799419       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:57:27.800675       1 config.go:200] "Starting service config controller"
	I1008 22:57:27.800753       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:57:27.800808       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:57:27.800838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:57:27.800899       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:57:27.800927       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:57:27.801582       1 config.go:309] "Starting node config controller"
	I1008 22:57:27.801794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:57:27.801845       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 22:57:27.901244       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 22:57:27.901287       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 22:57:27.901256       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [22fc15165b261a32940f2dedd3cd49b69d20e5e7e6bd128a867f2fd9e14ac7b3] <==
	I1008 22:57:23.316242       1 serving.go:386] Generated self-signed cert in-memory
	I1008 22:57:26.529545       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 22:57:26.529578       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:57:26.544688       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 22:57:26.544724       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 22:57:26.544763       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:57:26.544770       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:57:26.544784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:57:26.544791       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:57:26.545882       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 22:57:26.546122       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 22:57:26.648011       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:57:26.648082       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 22:57:26.648184       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: I1008 22:57:30.567793     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9tq\" (UniqueName: \"kubernetes.io/projected/ed4722e2-72aa-4561-81bb-11312618fca8-kube-api-access-fg9tq\") pod \"kubernetes-dashboard-855c9754f9-f6ktf\" (UID: \"ed4722e2-72aa-4561-81bb-11312618fca8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6ktf"
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: I1008 22:57:30.567820     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed4722e2-72aa-4561-81bb-11312618fca8-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-f6ktf\" (UID: \"ed4722e2-72aa-4561-81bb-11312618fca8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6ktf"
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: I1008 22:57:30.567841     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7f63cecd-fc6f-4f13-a5f1-d2a083f5417a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-cz2qs\" (UID: \"7f63cecd-fc6f-4f13-a5f1-d2a083f5417a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs"
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: W1008 22:57:30.727219     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/crio-b8f42166df0ab7f85a036b1aad8047caa5406bcb98bd62fc0edf3db4d9185542 WatchSource:0}: Error finding container b8f42166df0ab7f85a036b1aad8047caa5406bcb98bd62fc0edf3db4d9185542: Status 404 returned error can't find the container with id b8f42166df0ab7f85a036b1aad8047caa5406bcb98bd62fc0edf3db4d9185542
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: W1008 22:57:30.730933     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/crio-421ea0f5d2c6ebb976b5f34e759416a5c06a4b8bb32c93ad4392b7a77fa7a9aa WatchSource:0}: Error finding container 421ea0f5d2c6ebb976b5f34e759416a5c06a4b8bb32c93ad4392b7a77fa7a9aa: Status 404 returned error can't find the container with id 421ea0f5d2c6ebb976b5f34e759416a5c06a4b8bb32c93ad4392b7a77fa7a9aa
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: I1008 22:57:30.914280     765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 08 22:57:35 no-preload-939665 kubelet[765]: I1008 22:57:35.831699     765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6ktf" podStartSLOduration=1.2311345949999999 podStartE2EDuration="5.831679966s" podCreationTimestamp="2025-10-08 22:57:30 +0000 UTC" firstStartedPulling="2025-10-08 22:57:30.731119284 +0000 UTC m=+9.247907278" lastFinishedPulling="2025-10-08 22:57:35.331664664 +0000 UTC m=+13.848452649" observedRunningTime="2025-10-08 22:57:35.831390633 +0000 UTC m=+14.348178651" watchObservedRunningTime="2025-10-08 22:57:35.831679966 +0000 UTC m=+14.348467951"
	Oct 08 22:57:39 no-preload-939665 kubelet[765]: I1008 22:57:39.827839     765 scope.go:117] "RemoveContainer" containerID="70fca093a03ca4d0baa22b2a30aba9f2b2478ea60950940fedce5c9b4f3def00"
	Oct 08 22:57:40 no-preload-939665 kubelet[765]: I1008 22:57:40.832484     765 scope.go:117] "RemoveContainer" containerID="70fca093a03ca4d0baa22b2a30aba9f2b2478ea60950940fedce5c9b4f3def00"
	Oct 08 22:57:40 no-preload-939665 kubelet[765]: I1008 22:57:40.832864     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:40 no-preload-939665 kubelet[765]: E1008 22:57:40.833038     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:57:41 no-preload-939665 kubelet[765]: I1008 22:57:41.837121     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:41 no-preload-939665 kubelet[765]: E1008 22:57:41.842990     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:57:43 no-preload-939665 kubelet[765]: I1008 22:57:43.141364     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:43 no-preload-939665 kubelet[765]: E1008 22:57:43.141573     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: I1008 22:57:57.657625     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: I1008 22:57:57.875131     765 scope.go:117] "RemoveContainer" containerID="c28c75461cf867bdf283e13c269bfe255b9c7fc15ced477eb8b068c032bc4178"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: I1008 22:57:57.885268     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: I1008 22:57:57.885605     765 scope.go:117] "RemoveContainer" containerID="8a83632a73b7920e80de176c3a5ba53ba3266776a89382be87f4612c3f712fe1"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: E1008 22:57:57.885909     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:58:03 no-preload-939665 kubelet[765]: I1008 22:58:03.145568     765 scope.go:117] "RemoveContainer" containerID="8a83632a73b7920e80de176c3a5ba53ba3266776a89382be87f4612c3f712fe1"
	Oct 08 22:58:03 no-preload-939665 kubelet[765]: E1008 22:58:03.146559     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:58:14 no-preload-939665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 22:58:15 no-preload-939665 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 22:58:15 no-preload-939665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [156ae21a191583af601f44668a0ae6339b9eb2752a19bb2691e28827eb9f58b2] <==
	2025/10/08 22:57:35 Starting overwatch
	2025/10/08 22:57:35 Using namespace: kubernetes-dashboard
	2025/10/08 22:57:35 Using in-cluster config to connect to apiserver
	2025/10/08 22:57:35 Using secret token for csrf signing
	2025/10/08 22:57:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/08 22:57:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/08 22:57:35 Successful initial request to the apiserver, version: v1.34.1
	2025/10/08 22:57:35 Generating JWE encryption key
	2025/10/08 22:57:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/08 22:57:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/08 22:57:35 Initializing JWE encryption key from synchronized object
	2025/10/08 22:57:35 Creating in-cluster Sidecar client
	2025/10/08 22:57:35 Serving insecurely on HTTP port: 9090
	2025/10/08 22:57:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 22:58:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20] <==
	I1008 22:57:57.959763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 22:57:57.972319       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 22:57:57.972446       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 22:57:57.976261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:01.431814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:05.691802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:09.289743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:12.343723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:15.366700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:15.374558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:58:15.374703       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 22:58:15.374848       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-939665_39053aaa-5595-41ac-835d-a61b6438acc8!
	I1008 22:58:15.375777       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d7db233-93f3-4724-94fd-ba2ce2cb320c", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-939665_39053aaa-5595-41ac-835d-a61b6438acc8 became leader
	W1008 22:58:15.382031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:15.387792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:58:15.476454       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-939665_39053aaa-5595-41ac-835d-a61b6438acc8!
	W1008 22:58:17.391060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:17.398776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c28c75461cf867bdf283e13c269bfe255b9c7fc15ced477eb8b068c032bc4178] <==
	I1008 22:57:27.371437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 22:57:57.379000       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-939665 -n no-preload-939665
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-939665 -n no-preload-939665: exit status 2 (555.243677ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-939665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-939665
helpers_test.go:243: (dbg) docker inspect no-preload-939665:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4",
	        "Created": "2025-10-08T22:55:51.376878504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 189343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:57:15.052934733Z",
	            "FinishedAt": "2025-10-08T22:57:14.257504308Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/hostname",
	        "HostsPath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/hosts",
	        "LogPath": "/var/lib/docker/containers/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4-json.log",
	        "Name": "/no-preload-939665",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-939665:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-939665",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4",
	                "LowerDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/414105168e1b3a8bc6b746e9085229ee05c13f5f3658ae11d4a62b11a71660d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-939665",
	                "Source": "/var/lib/docker/volumes/no-preload-939665/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-939665",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-939665",
	                "name.minikube.sigs.k8s.io": "no-preload-939665",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "100e573ca2d387da7af8696d7655863318af52f4290b1916df5dad80e070430d",
	            "SandboxKey": "/var/run/docker/netns/100e573ca2d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-939665": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:52:42:85:45:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc660108ce7e425dc8ccc8b9b4c79d2e7285488dbd4605c4f5b483d992fc9478",
	                    "EndpointID": "e38b20ba2334e3c440dd7b4ea47346b3a72a6ccc7a6529503100457228ab0831",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-939665",
	                        "28f143a4ef4a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-939665 -n no-preload-939665
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-939665 -n no-preload-939665: exit status 2 (473.561754ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-939665 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-939665 logs -n 25: (1.621206672s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-expiration-292528                                                                                                                                                                                                                     │ cert-expiration-292528    │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │ 08 Oct 25 22:49 UTC │
	│ start   │ -p force-systemd-flag-385382 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:49 UTC │                     │
	│ delete  │ -p force-systemd-env-092546                                                                                                                                                                                                                   │ force-systemd-env-092546  │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:52 UTC │
	│ start   │ -p cert-options-378019 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:52 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ cert-options-378019 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ ssh     │ -p cert-options-378019 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ delete  │ -p cert-options-378019                                                                                                                                                                                                                        │ cert-options-378019       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │                     │
	│ stop    │ -p old-k8s-version-110407 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-110407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:55 UTC │
	│ image   │ old-k8s-version-110407 image list --format=json                                                                                                                                                                                               │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ pause   │ -p old-k8s-version-110407 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │                     │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407    │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	│ stop    │ -p no-preload-939665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:58 UTC │
	│ image   │ no-preload-939665 image list --format=json                                                                                                                                                                                                    │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ pause   │ -p no-preload-939665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-939665         │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	│ ssh     │ force-systemd-flag-385382 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p force-systemd-flag-385382                                                                                                                                                                                                                  │ force-systemd-flag-385382 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:57:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:57:14.782613  189215 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:57:14.782899  189215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:57:14.782916  189215 out.go:374] Setting ErrFile to fd 2...
	I1008 22:57:14.782922  189215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:57:14.783293  189215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:57:14.783741  189215 out.go:368] Setting JSON to false
	I1008 22:57:14.784656  189215 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5985,"bootTime":1759958250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:57:14.784745  189215 start.go:141] virtualization:  
	I1008 22:57:14.787916  189215 out.go:179] * [no-preload-939665] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:57:14.791714  189215 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:57:14.791882  189215 notify.go:220] Checking for updates...
	I1008 22:57:14.797701  189215 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:57:14.800574  189215 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:57:14.803453  189215 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:57:14.806361  189215 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:57:14.809186  189215 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:57:14.812556  189215 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:57:14.813125  189215 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:57:14.841927  189215 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:57:14.842105  189215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:57:14.898169  189215 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:57:14.888828193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:57:14.898273  189215 docker.go:318] overlay module found
	I1008 22:57:14.901448  189215 out.go:179] * Using the docker driver based on existing profile
	I1008 22:57:14.904243  189215 start.go:305] selected driver: docker
	I1008 22:57:14.904260  189215 start.go:925] validating driver "docker" against &{Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:57:14.904383  189215 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:57:14.905115  189215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:57:14.957085  189215 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:57:14.948430473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:57:14.957449  189215 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:57:14.957477  189215 cni.go:84] Creating CNI manager for ""
	I1008 22:57:14.957535  189215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:57:14.957581  189215 start.go:349] cluster config:
	{Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:57:14.960867  189215 out.go:179] * Starting "no-preload-939665" primary control-plane node in "no-preload-939665" cluster
	I1008 22:57:14.963850  189215 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:57:14.966939  189215 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:57:14.969809  189215 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:57:14.969897  189215 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:57:14.969958  189215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/config.json ...
	I1008 22:57:14.970334  189215 cache.go:107] acquiring lock: {Name:mk344f5adac59ef32f6d69c009b0f8ec87052611 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970423  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1008 22:57:14.970437  189215 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 123.07µs
	I1008 22:57:14.970460  189215 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1008 22:57:14.970475  189215 cache.go:107] acquiring lock: {Name:mk2a1f78f7d6511aea6d634a58ed1c88718aab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970511  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1008 22:57:14.970520  189215 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 46.335µs
	I1008 22:57:14.970527  189215 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1008 22:57:14.970542  189215 cache.go:107] acquiring lock: {Name:mk7141aa7b89df55e8dad25221487d909ba46017 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970574  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1008 22:57:14.970582  189215 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 40.935µs
	I1008 22:57:14.970589  189215 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1008 22:57:14.970598  189215 cache.go:107] acquiring lock: {Name:mk49b6b290192d16491277897c30c50e3badc30b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970628  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1008 22:57:14.970638  189215 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.501µs
	I1008 22:57:14.970644  189215 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1008 22:57:14.970653  189215 cache.go:107] acquiring lock: {Name:mka3f9c49147e0e292b0cfd3d6255817b177ac9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970685  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1008 22:57:14.970695  189215 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 42.691µs
	I1008 22:57:14.970701  189215 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1008 22:57:14.970713  189215 cache.go:107] acquiring lock: {Name:mk85b30d8a79adbfa59b06c1c836919be1606fc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970744  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1008 22:57:14.970753  189215 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 43.012µs
	I1008 22:57:14.970759  189215 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1008 22:57:14.970774  189215 cache.go:107] acquiring lock: {Name:mka1ae807285591bb895528e804cb6d37d5af28f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970800  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1008 22:57:14.970809  189215 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.046µs
	I1008 22:57:14.970815  189215 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1008 22:57:14.970825  189215 cache.go:107] acquiring lock: {Name:mk61bfc3bad4ca73036eaa8d93cb87fd5c241083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.970863  189215 cache.go:115] /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1008 22:57:14.970873  189215 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 50.766µs
	I1008 22:57:14.970880  189215 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1008 22:57:14.970886  189215 cache.go:87] Successfully saved all images to host disk.
	I1008 22:57:14.990397  189215 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:57:14.990422  189215 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:57:14.990442  189215 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:57:14.990471  189215 start.go:360] acquireMachinesLock for no-preload-939665: {Name:mk60e1980ef0e273f848717956362180f47a8fab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:57:14.990555  189215 start.go:364] duration metric: took 63.353µs to acquireMachinesLock for "no-preload-939665"
	I1008 22:57:14.990584  189215 start.go:96] Skipping create...Using existing machine configuration
	I1008 22:57:14.990607  189215 fix.go:54] fixHost starting: 
	I1008 22:57:14.990890  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:15.009848  189215 fix.go:112] recreateIfNeeded on no-preload-939665: state=Stopped err=<nil>
	W1008 22:57:15.009885  189215 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 22:57:15.013952  189215 out.go:252] * Restarting existing docker container for "no-preload-939665" ...
	I1008 22:57:15.014066  189215 cli_runner.go:164] Run: docker start no-preload-939665
	I1008 22:57:15.284522  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:15.311136  189215 kic.go:430] container "no-preload-939665" state is running.
	I1008 22:57:15.311522  189215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:57:15.331603  189215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/config.json ...
	I1008 22:57:15.331823  189215 machine.go:93] provisionDockerMachine start ...
	I1008 22:57:15.331882  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:15.351588  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:15.351896  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:15.351905  189215 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:57:15.352659  189215 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 22:57:18.497516  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-939665
	
	I1008 22:57:18.497540  189215 ubuntu.go:182] provisioning hostname "no-preload-939665"
	I1008 22:57:18.497652  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:18.515142  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:18.515455  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:18.515473  189215 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-939665 && echo "no-preload-939665" | sudo tee /etc/hostname
	I1008 22:57:18.671631  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-939665
	
	I1008 22:57:18.671704  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:18.689144  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:18.689488  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:18.689514  189215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-939665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-939665/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-939665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:57:18.833913  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:57:18.833983  189215 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:57:18.834024  189215 ubuntu.go:190] setting up certificates
	I1008 22:57:18.834042  189215 provision.go:84] configureAuth start
	I1008 22:57:18.834106  189215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:57:18.854660  189215 provision.go:143] copyHostCerts
	I1008 22:57:18.854730  189215 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:57:18.854749  189215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:57:18.854844  189215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:57:18.854950  189215 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:57:18.854967  189215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:57:18.855004  189215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:57:18.855062  189215 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:57:18.855073  189215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:57:18.855099  189215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:57:18.855154  189215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.no-preload-939665 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-939665]
	I1008 22:57:19.066124  189215 provision.go:177] copyRemoteCerts
	I1008 22:57:19.066188  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:57:19.066228  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.084957  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.185272  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 22:57:19.204144  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:57:19.221907  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 22:57:19.239403  189215 provision.go:87] duration metric: took 405.337994ms to configureAuth
	I1008 22:57:19.239432  189215 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:57:19.239668  189215 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:57:19.239788  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.259287  189215 main.go:141] libmachine: Using SSH client type: native
	I1008 22:57:19.259598  189215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33066 <nil> <nil>}
	I1008 22:57:19.259621  189215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:57:19.574850  189215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:57:19.574879  189215 machine.go:96] duration metric: took 4.243046683s to provisionDockerMachine
	I1008 22:57:19.574890  189215 start.go:293] postStartSetup for "no-preload-939665" (driver="docker")
	I1008 22:57:19.574901  189215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:57:19.574971  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:57:19.575015  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.593115  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.694140  189215 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:57:19.697805  189215 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:57:19.697837  189215 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:57:19.697849  189215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:57:19.697903  189215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:57:19.697993  189215 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:57:19.698106  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:57:19.706223  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:57:19.724098  189215 start.go:296] duration metric: took 149.193283ms for postStartSetup
	I1008 22:57:19.724176  189215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:57:19.724236  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.742535  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.842716  189215 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:57:19.847703  189215 fix.go:56] duration metric: took 4.857097744s for fixHost
	I1008 22:57:19.847773  189215 start.go:83] releasing machines lock for "no-preload-939665", held for 4.857203623s
	I1008 22:57:19.847881  189215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-939665
	I1008 22:57:19.865178  189215 ssh_runner.go:195] Run: cat /version.json
	I1008 22:57:19.865223  189215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:57:19.865233  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.865286  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:19.885811  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:19.891213  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:20.088495  189215 ssh_runner.go:195] Run: systemctl --version
	I1008 22:57:20.095529  189215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:57:20.132456  189215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:57:20.137397  189215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:57:20.137500  189215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:57:20.146025  189215 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 22:57:20.146049  189215 start.go:495] detecting cgroup driver to use...
	I1008 22:57:20.146113  189215 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:57:20.146179  189215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:57:20.161810  189215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:57:20.175319  189215 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:57:20.175421  189215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:57:20.191090  189215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:57:20.204457  189215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:57:20.315736  189215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:57:20.440129  189215 docker.go:234] disabling docker service ...
	I1008 22:57:20.440216  189215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:57:20.455361  189215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:57:20.469076  189215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:57:20.586412  189215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:57:20.706047  189215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:57:20.718719  189215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:57:20.732049  189215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:57:20.732141  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.740752  189215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:57:20.740813  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.749357  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.758257  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.767201  189215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:57:20.775190  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.783656  189215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.791696  189215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:57:20.800386  189215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:57:20.808060  189215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:57:20.815631  189215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:57:20.925930  189215 ssh_runner.go:195] Run: sudo systemctl restart crio
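The run of sed commands above edits the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, puts conmon into the pod cgroup, and allows unprivileged low ports via default_sysctls (plus a couple of smaller cleanups) before the daemon-reload and restart take effect. As a rough sketch of the end state only (section names assumed from CRI-O's standard config layout, not taken from this log), the drop-in would contain roughly:

    # Approximate content of /etc/crio/crio.conf.d/02-crio.conf after the edits above.
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]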
	I1008 22:57:21.067006  189215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:57:21.067071  189215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:57:21.070804  189215 start.go:563] Will wait 60s for crictl version
	I1008 22:57:21.070866  189215 ssh_runner.go:195] Run: which crictl
	I1008 22:57:21.074187  189215 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:57:21.098882  189215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
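crictl works here without an explicit endpoint because the /etc/crictl.yaml written a few lines earlier records the CRI socket (runtime-endpoint: unix:///var/run/crio/crio.sock), so later calls such as "sudo crictl images --output json" need no extra flags. A one-off equivalent without the config file, as a sketch:

    # Equivalent invocation when no /etc/crictl.yaml is present.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images --output json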
	I1008 22:57:21.099037  189215 ssh_runner.go:195] Run: crio --version
	I1008 22:57:21.129152  189215 ssh_runner.go:195] Run: crio --version
	I1008 22:57:21.159678  189215 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:57:21.162526  189215 cli_runner.go:164] Run: docker network inspect no-preload-939665 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:57:21.182847  189215 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:57:21.186792  189215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
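The one-liner above (and the matching one further down for control-plane.minikube.internal) is a compact idempotent update of /etc/hosts: drop any existing line for the name, then append the current mapping, so repeated runs never accumulate duplicates. The same logic expanded for readability, with the name and address taken from this log:

    # Idempotent /etc/hosts entry update (expanded form of the one-liner above).
    NAME=host.minikube.internal
    IP=192.168.85.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts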
	I1008 22:57:21.196569  189215 kubeadm.go:883] updating cluster {Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:57:21.196696  189215 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:57:21.196743  189215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:57:21.234573  189215 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:57:21.234598  189215 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:57:21.234606  189215 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1008 22:57:21.234750  189215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-939665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
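The empty ExecStart= line in the unit snippet above is the usual systemd override idiom: a drop-in first clears the ExecStart list inherited from the base kubelet.service, then supplies its own command line; the scp steps below install this content as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal hand-written drop-in of the same shape (flags abbreviated here for illustration):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

Such a drop-in only takes effect after "sudo systemctl daemon-reload" and a kubelet restart, which is what the log does further down via daemon-reload and "systemctl start kubelet".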
	I1008 22:57:21.234830  189215 ssh_runner.go:195] Run: crio config
	I1008 22:57:21.292904  189215 cni.go:84] Creating CNI manager for ""
	I1008 22:57:21.292932  189215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:57:21.292950  189215 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:57:21.292972  189215 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-939665 NodeName:no-preload-939665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:57:21.293101  189215 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-939665"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
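The config printed above bundles four kubeadm documents in one file: InitConfiguration and ClusterConfiguration for the control plane, plus KubeletConfiguration and KubeProxyConfiguration for the node components; a few lines down it is copied to /var/tmp/minikube/kubeadm.yaml.new. On a fresh cluster a file like this is consumed through kubeadm's --config flag; since this run restarts an existing cluster the actual invocation differs, so treat the following only as an illustrative sketch:

    # Sketch: bootstrapping a control plane from a combined kubeadm config (fresh-cluster case).
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new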
	
	I1008 22:57:21.293173  189215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:57:21.301074  189215 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:57:21.301163  189215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:57:21.308677  189215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 22:57:21.321204  189215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:57:21.333547  189215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1008 22:57:21.346162  189215 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:57:21.350364  189215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:57:21.360170  189215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:57:21.467099  189215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:57:21.481987  189215 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665 for IP: 192.168.85.2
	I1008 22:57:21.482060  189215 certs.go:195] generating shared ca certs ...
	I1008 22:57:21.482092  189215 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:21.482258  189215 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:57:21.482339  189215 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:57:21.482373  189215 certs.go:257] generating profile certs ...
	I1008 22:57:21.482513  189215 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.key
	I1008 22:57:21.482622  189215 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key.108ea954
	I1008 22:57:21.482693  189215 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key
	I1008 22:57:21.482836  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:57:21.482893  189215 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:57:21.482922  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:57:21.482982  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:57:21.483035  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:57:21.483093  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:57:21.483163  189215 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:57:21.483813  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:57:21.502778  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:57:21.520733  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:57:21.537842  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:57:21.559178  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 22:57:21.579614  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:57:21.600183  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:57:21.622833  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 22:57:21.643796  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:57:21.664903  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:57:21.687300  189215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:57:21.707757  189215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:57:21.721225  189215 ssh_runner.go:195] Run: openssl version
	I1008 22:57:21.727932  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:57:21.736231  189215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:57:21.740177  189215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:57:21.740254  189215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:57:21.787097  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:57:21.794806  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:57:21.802792  189215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:57:21.809303  189215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:57:21.809402  189215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:57:21.851934  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:57:21.860228  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:57:21.868365  189215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:57:21.872140  189215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:57:21.872222  189215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:57:21.913723  189215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
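The three openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL CA directory layout: each trusted certificate under /etc/ssl/certs is reachable through a symlink named after its subject-name hash plus a ".0" suffix (b5213941.0 for minikubeCA.pem, for example), which is how OpenSSL locates issuers during chain verification. The generic pattern, as a sketch:

    # Expose a CA certificate under its OpenSSL subject-hash name.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"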
	I1008 22:57:21.921382  189215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:57:21.925115  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 22:57:21.966240  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 22:57:22.008960  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 22:57:22.050751  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 22:57:22.105518  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 22:57:22.176820  189215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
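Each of the -checkend 86400 calls above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: openssl exits 0 if it will not have expired by then and non-zero if it will, so the exit status alone is enough to decide between reusing and regenerating a certificate. A small sketch of that decision:

    # openssl's exit status reports whether the certificate expires within 24h.
    CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$CRT" -checkend 86400; then
        echo "still valid for at least 24h - keep it"
    else
        echo "expires within 24h (or already expired) - regenerate"
    fi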
	I1008 22:57:22.232949  189215 kubeadm.go:400] StartCluster: {Name:no-preload-939665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-939665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:57:22.233035  189215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:57:22.233090  189215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:57:22.302288  189215 cri.go:89] found id: "22fc15165b261a32940f2dedd3cd49b69d20e5e7e6bd128a867f2fd9e14ac7b3"
	I1008 22:57:22.302311  189215 cri.go:89] found id: "f8d8050a525b66b1f6059b9bef9774b0a018d7f0b512729419df31644ff85c2d"
	I1008 22:57:22.302317  189215 cri.go:89] found id: "e70ea0acf987029e54c7b861915d0152d9b02ade1e0875e36f54a30ca0b4114e"
	I1008 22:57:22.302331  189215 cri.go:89] found id: "fab90393033f57458857473a4b92f90f061b427583bfdde329136620a71abcee"
	I1008 22:57:22.302335  189215 cri.go:89] found id: ""
	I1008 22:57:22.302398  189215 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 22:57:22.323705  189215 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:57:22Z" level=error msg="open /run/runc: no such file or directory"
	I1008 22:57:22.323795  189215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:57:22.336052  189215 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 22:57:22.336070  189215 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 22:57:22.336119  189215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 22:57:22.351401  189215 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 22:57:22.351898  189215 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-939665" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:57:22.352007  189215 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-939665" cluster setting kubeconfig missing "no-preload-939665" context setting]
	I1008 22:57:22.352298  189215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:22.353574  189215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 22:57:22.366548  189215 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 22:57:22.366580  189215 kubeadm.go:601] duration metric: took 30.503126ms to restartPrimaryControlPlane
	I1008 22:57:22.366591  189215 kubeadm.go:402] duration metric: took 133.650455ms to StartCluster
	I1008 22:57:22.366606  189215 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:22.366672  189215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:57:22.367360  189215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:57:22.367593  189215 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:57:22.367913  189215 config.go:182] Loaded profile config "no-preload-939665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:57:22.367964  189215 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:57:22.368030  189215 addons.go:69] Setting storage-provisioner=true in profile "no-preload-939665"
	I1008 22:57:22.368057  189215 addons.go:238] Setting addon storage-provisioner=true in "no-preload-939665"
	W1008 22:57:22.368070  189215 addons.go:247] addon storage-provisioner should already be in state true
	I1008 22:57:22.368095  189215 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:57:22.368651  189215 addons.go:69] Setting dashboard=true in profile "no-preload-939665"
	I1008 22:57:22.368675  189215 addons.go:238] Setting addon dashboard=true in "no-preload-939665"
	W1008 22:57:22.368687  189215 addons.go:247] addon dashboard should already be in state true
	I1008 22:57:22.368707  189215 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:57:22.369171  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.369483  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.369909  189215 addons.go:69] Setting default-storageclass=true in profile "no-preload-939665"
	I1008 22:57:22.369931  189215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-939665"
	I1008 22:57:22.370206  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.373239  189215 out.go:179] * Verifying Kubernetes components...
	I1008 22:57:22.376522  189215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:57:22.435767  189215 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 22:57:22.435850  189215 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:57:22.437591  189215 addons.go:238] Setting addon default-storageclass=true in "no-preload-939665"
	W1008 22:57:22.437613  189215 addons.go:247] addon default-storageclass should already be in state true
	I1008 22:57:22.437660  189215 host.go:66] Checking if "no-preload-939665" exists ...
	I1008 22:57:22.438223  189215 cli_runner.go:164] Run: docker container inspect no-preload-939665 --format={{.State.Status}}
	I1008 22:57:22.438823  189215 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:57:22.438847  189215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:57:22.438905  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:22.442056  189215 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 22:57:22.453816  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 22:57:22.453845  189215 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 22:57:22.453928  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:22.489201  189215 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:57:22.489223  189215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:57:22.489292  189215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-939665
	I1008 22:57:22.493728  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:22.506546  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:22.526057  189215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33066 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/no-preload-939665/id_rsa Username:docker}
	I1008 22:57:22.731621  189215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:57:22.754398  189215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:57:22.802907  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 22:57:22.802933  189215 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 22:57:22.820730  189215 node_ready.go:35] waiting up to 6m0s for node "no-preload-939665" to be "Ready" ...
	I1008 22:57:22.834337  189215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:57:22.867847  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 22:57:22.867873  189215 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 22:57:22.958881  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 22:57:22.959044  189215 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 22:57:23.009828  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 22:57:23.009895  189215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 22:57:23.051687  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 22:57:23.051760  189215 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 22:57:23.078571  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 22:57:23.078656  189215 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 22:57:23.095149  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 22:57:23.095223  189215 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 22:57:23.110652  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 22:57:23.110724  189215 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 22:57:23.146912  189215 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:57:23.146978  189215 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 22:57:23.172565  189215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 22:57:26.428523  189215 node_ready.go:49] node "no-preload-939665" is "Ready"
	I1008 22:57:26.428555  189215 node_ready.go:38] duration metric: took 3.607792114s for node "no-preload-939665" to be "Ready" ...
	I1008 22:57:26.428570  189215 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:57:26.428661  189215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:57:26.609530  189215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.855058214s)
	I1008 22:57:27.790649  189215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.956230897s)
	I1008 22:57:27.790861  189215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.618214083s)
	I1008 22:57:27.791096  189215 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.362417244s)
	I1008 22:57:27.791160  189215 api_server.go:72] duration metric: took 5.423535251s to wait for apiserver process to appear ...
	I1008 22:57:27.791181  189215 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:57:27.791226  189215 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:57:27.794370  189215 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-939665 addons enable metrics-server
	
	I1008 22:57:27.797309  189215 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1008 22:57:27.800172  189215 addons.go:514] duration metric: took 5.432173441s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1008 22:57:27.808726  189215 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 22:57:27.808753  189215 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 22:57:28.291323  189215 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 22:57:28.299370  189215 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 22:57:28.300462  189215 api_server.go:141] control plane version: v1.34.1
	I1008 22:57:28.300485  189215 api_server.go:131] duration metric: took 509.284275ms to wait for apiserver health ...
	I1008 22:57:28.300495  189215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:57:28.303496  189215 system_pods.go:59] 8 kube-system pods found
	I1008 22:57:28.303535  189215 system_pods.go:61] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:57:28.303567  189215 system_pods.go:61] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:57:28.303578  189215 system_pods.go:61] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:57:28.303587  189215 system_pods.go:61] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:57:28.303604  189215 system_pods.go:61] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:57:28.303610  189215 system_pods.go:61] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:57:28.303617  189215 system_pods.go:61] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:57:28.303636  189215 system_pods.go:61] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Running
	I1008 22:57:28.303647  189215 system_pods.go:74] duration metric: took 3.14283ms to wait for pod list to return data ...
	I1008 22:57:28.303663  189215 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:57:28.306192  189215 default_sa.go:45] found service account: "default"
	I1008 22:57:28.306220  189215 default_sa.go:55] duration metric: took 2.550603ms for default service account to be created ...
	I1008 22:57:28.306230  189215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:57:28.308825  189215 system_pods.go:86] 8 kube-system pods found
	I1008 22:57:28.308858  189215 system_pods.go:89] "coredns-66bc5c9577-wj8wf" [a4b8c0c9-d983-4a71-b7d3-6fd64717accb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:57:28.308868  189215 system_pods.go:89] "etcd-no-preload-939665" [3c4f4682-bfc7-46dc-9fe2-a192feee0706] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 22:57:28.308874  189215 system_pods.go:89] "kindnet-dhln4" [41ab815b-433a-4ad3-b87b-a95a7085d8a1] Running
	I1008 22:57:28.308881  189215 system_pods.go:89] "kube-apiserver-no-preload-939665" [2aa213b3-7163-4849-9598-4f385ff7af8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 22:57:28.308888  189215 system_pods.go:89] "kube-controller-manager-no-preload-939665" [53eff972-f642-4e8e-a68e-78fe6cb77041] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 22:57:28.308892  189215 system_pods.go:89] "kube-proxy-77lvp" [7ec24b36-a7d9-4675-8b2c-4d059ae0f4f2] Running
	I1008 22:57:28.308899  189215 system_pods.go:89] "kube-scheduler-no-preload-939665" [d4c7d02a-f1fa-487b-b48f-bcdec83da459] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 22:57:28.308909  189215 system_pods.go:89] "storage-provisioner" [c9b0b18d-b8ca-4994-99c4-d6485cc58032] Running
	I1008 22:57:28.308915  189215 system_pods.go:126] duration metric: took 2.680204ms to wait for k8s-apps to be running ...
	I1008 22:57:28.308929  189215 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:57:28.308984  189215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:57:28.322889  189215 system_svc.go:56] duration metric: took 13.951449ms WaitForService to wait for kubelet
	I1008 22:57:28.322918  189215 kubeadm.go:586] duration metric: took 5.955290813s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:57:28.322958  189215 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:57:28.328387  189215 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:57:28.328425  189215 node_conditions.go:123] node cpu capacity is 2
	I1008 22:57:28.328438  189215 node_conditions.go:105] duration metric: took 5.467412ms to run NodePressure ...
	I1008 22:57:28.328451  189215 start.go:241] waiting for startup goroutines ...
	I1008 22:57:28.328458  189215 start.go:246] waiting for cluster config update ...
	I1008 22:57:28.328473  189215 start.go:255] writing updated cluster config ...
	I1008 22:57:28.328760  189215 ssh_runner.go:195] Run: rm -f paused
	I1008 22:57:28.332532  189215 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:57:28.336688  189215 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wj8wf" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 22:57:30.350864  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:32.843064  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:34.844285  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:36.845168  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:39.344059  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:41.842645  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:43.843301  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:46.342737  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:48.842335  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:50.842730  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:52.842806  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:55.342860  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:57:57.844337  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	W1008 22:58:00.353944  189215 pod_ready.go:104] pod "coredns-66bc5c9577-wj8wf" is not "Ready", error: <nil>
	I1008 22:58:01.343135  189215 pod_ready.go:94] pod "coredns-66bc5c9577-wj8wf" is "Ready"
	I1008 22:58:01.343163  189215 pod_ready.go:86] duration metric: took 33.006442095s for pod "coredns-66bc5c9577-wj8wf" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.346159  189215 pod_ready.go:83] waiting for pod "etcd-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.350999  189215 pod_ready.go:94] pod "etcd-no-preload-939665" is "Ready"
	I1008 22:58:01.351028  189215 pod_ready.go:86] duration metric: took 4.841796ms for pod "etcd-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.353471  189215 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.358536  189215 pod_ready.go:94] pod "kube-apiserver-no-preload-939665" is "Ready"
	I1008 22:58:01.358567  189215 pod_ready.go:86] duration metric: took 5.065093ms for pod "kube-apiserver-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.361323  189215 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.541059  189215 pod_ready.go:94] pod "kube-controller-manager-no-preload-939665" is "Ready"
	I1008 22:58:01.541090  189215 pod_ready.go:86] duration metric: took 179.740333ms for pod "kube-controller-manager-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:01.741235  189215 pod_ready.go:83] waiting for pod "kube-proxy-77lvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.141610  189215 pod_ready.go:94] pod "kube-proxy-77lvp" is "Ready"
	I1008 22:58:02.141660  189215 pod_ready.go:86] duration metric: took 400.391388ms for pod "kube-proxy-77lvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.340814  189215 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.741219  189215 pod_ready.go:94] pod "kube-scheduler-no-preload-939665" is "Ready"
	I1008 22:58:02.741265  189215 pod_ready.go:86] duration metric: took 400.423027ms for pod "kube-scheduler-no-preload-939665" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:58:02.741278  189215 pod_ready.go:40] duration metric: took 34.408667436s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:58:02.798065  189215 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:58:02.802090  189215 out.go:179] * Done! kubectl is now configured to use "no-preload-939665" cluster and "default" namespace by default
	I1008 22:58:16.165730  171796 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	I1008 22:58:16.166225  171796 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	I1008 22:58:16.166324  171796 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	I1008 22:58:16.166332  171796 kubeadm.go:318] 
	I1008 22:58:16.166422  171796 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1008 22:58:16.166504  171796 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1008 22:58:16.166591  171796 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1008 22:58:16.167679  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1008 22:58:16.167775  171796 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1008 22:58:16.168263  171796 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1008 22:58:16.168455  171796 kubeadm.go:318] 
	I1008 22:58:16.172618  171796 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:58:16.172842  171796 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:58:16.172947  171796 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:58:16.173497  171796 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1008 22:58:16.173565  171796 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1008 22:58:16.173617  171796 kubeadm.go:402] duration metric: took 8m14.733249742s to StartCluster
	I1008 22:58:16.173680  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1008 22:58:16.173740  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1008 22:58:16.201140  171796 cri.go:89] found id: ""
	I1008 22:58:16.201170  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.201184  171796 logs.go:284] No container was found matching "kube-apiserver"
	I1008 22:58:16.201191  171796 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1008 22:58:16.201248  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1008 22:58:16.234251  171796 cri.go:89] found id: ""
	I1008 22:58:16.234272  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.234280  171796 logs.go:284] No container was found matching "etcd"
	I1008 22:58:16.234288  171796 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1008 22:58:16.234349  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1008 22:58:16.269941  171796 cri.go:89] found id: ""
	I1008 22:58:16.269961  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.269969  171796 logs.go:284] No container was found matching "coredns"
	I1008 22:58:16.269975  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1008 22:58:16.270030  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1008 22:58:16.304017  171796 cri.go:89] found id: ""
	I1008 22:58:16.304038  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.304046  171796 logs.go:284] No container was found matching "kube-scheduler"
	I1008 22:58:16.304053  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1008 22:58:16.304110  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1008 22:58:16.340131  171796 cri.go:89] found id: ""
	I1008 22:58:16.340156  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.340164  171796 logs.go:284] No container was found matching "kube-proxy"
	I1008 22:58:16.340171  171796 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1008 22:58:16.340228  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1008 22:58:16.381588  171796 cri.go:89] found id: ""
	I1008 22:58:16.381610  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.381618  171796 logs.go:284] No container was found matching "kube-controller-manager"
	I1008 22:58:16.381625  171796 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1008 22:58:16.381708  171796 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1008 22:58:16.425264  171796 cri.go:89] found id: ""
	I1008 22:58:16.425286  171796 logs.go:282] 0 containers: []
	W1008 22:58:16.425294  171796 logs.go:284] No container was found matching "kindnet"
	I1008 22:58:16.425303  171796 logs.go:123] Gathering logs for kubelet ...
	I1008 22:58:16.425314  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 22:58:16.585708  171796 logs.go:123] Gathering logs for dmesg ...
	I1008 22:58:16.586881  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 22:58:16.610434  171796 logs.go:123] Gathering logs for describe nodes ...
	I1008 22:58:16.610516  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1008 22:58:17.067127  171796 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1008 22:58:17.056924    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.058088    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.059037    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.060775    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.061073    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1008 22:58:17.056924    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.058088    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.059037    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.060775    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1008 22:58:17.061073    2371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1008 22:58:17.067153  171796 logs.go:123] Gathering logs for CRI-O ...
	I1008 22:58:17.067166  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1008 22:58:17.156520  171796 logs.go:123] Gathering logs for container status ...
	I1008 22:58:17.156557  171796 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1008 22:58:17.208767  171796 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.005376143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1008 22:58:17.208818  171796 out.go:285] * 
	W1008 22:58:17.208871  171796 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.005376143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:58:17.208889  171796 out.go:285] * 
	W1008 22:58:17.211057  171796 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 22:58:17.217037  171796 out.go:203] 
	W1008 22:58:17.219364  171796 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.005376143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000760899s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001366461s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001201651s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1008 22:58:17.219405  171796 out.go:285] * 
	I1008 22:58:17.225140  171796 out.go:203] 
	
	
	==> CRI-O <==
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.888125718Z" level=info msg="Removing container: d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7" id=5154abab-bafb-4ab3-8863-bce272cb72f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.904959072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.905236137Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a14d0df2d057639936f6018c94d29381c1e6bf90c5dfd94d2a0bfe136c515c75/merged/etc/passwd: no such file or directory"
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.905278903Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a14d0df2d057639936f6018c94d29381c1e6bf90c5dfd94d2a0bfe136c515c75/merged/etc/group: no such file or directory"
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.906539865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.915999086Z" level=info msg="Error loading conmon cgroup of container d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7: cgroup deleted" id=5154abab-bafb-4ab3-8863-bce272cb72f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.927572554Z" level=info msg="Removed container d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs/dashboard-metrics-scraper" id=5154abab-bafb-4ab3-8863-bce272cb72f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.938080747Z" level=info msg="Created container 92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20: kube-system/storage-provisioner/storage-provisioner" id=2f07cf73-baa6-49f7-8a35-4ea34cd4708d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.939146909Z" level=info msg="Starting container: 92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20" id=a15b8902-f347-438c-b025-59703d9df8c1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:57:57 no-preload-939665 crio[649]: time="2025-10-08T22:57:57.942544249Z" level=info msg="Started container" PID=1629 containerID=92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20 description=kube-system/storage-provisioner/storage-provisioner id=a15b8902-f347-438c-b025-59703d9df8c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a2e9aa746b8090e32353a12b0a2f1252ac263167a34631fae4d71ef6f6d254ed
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.506135373Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.512555181Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.512591834Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.512614792Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.515933042Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.515966782Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.515993457Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.519723601Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.51975602Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.519779905Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.522819145Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.522852729Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.522874785Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.525923838Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 22:58:07 no-preload-939665 crio[649]: time="2025-10-08T22:58:07.525958571Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	92514c9dbe0b3       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago      Running             storage-provisioner         2                   a2e9aa746b809       storage-provisioner                          kube-system
	8a83632a73b79       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   421ea0f5d2c6e       dashboard-metrics-scraper-6ffb444bf9-cz2qs   kubernetes-dashboard
	156ae21a19158       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago      Running             kubernetes-dashboard        0                   b8f42166df0ab       kubernetes-dashboard-855c9754f9-f6ktf        kubernetes-dashboard
	bade74eb19946       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago      Running             coredns                     1                   802c77776e892       coredns-66bc5c9577-wj8wf                     kube-system
	e863c168a0e85       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   f8ec2a06eefb5       busybox                                      default
	c0d2286c0fb19       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago      Running             kube-proxy                  1                   201c7dea97bcf       kube-proxy-77lvp                             kube-system
	c28c75461cf86       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago      Exited              storage-provisioner         1                   a2e9aa746b809       storage-provisioner                          kube-system
	1099fb7bc0b5a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago      Running             kindnet-cni                 1                   bc4286483b119       kindnet-dhln4                                kube-system
	22fc15165b261       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   2fc82623bdfc2       kube-scheduler-no-preload-939665             kube-system
	f8d8050a525b6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   c6a3eb9c84141       etcd-no-preload-939665                       kube-system
	e70ea0acf9870       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   519f888d62fb3       kube-controller-manager-no-preload-939665    kube-system
	fab90393033f5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   f18b7611d22c4       kube-apiserver-no-preload-939665             kube-system
	
	
	==> coredns [bade74eb19946af21f5ffbfb4ffa4e7f81bb41043453f2dca89df500be9f1376] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46218 - 2562 "HINFO IN 198312727653361217.8862779728426046954. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01928423s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-939665
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-939665
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=no-preload-939665
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_56_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:56:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-939665
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:58:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:57:57 +0000   Wed, 08 Oct 2025 22:56:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:57:57 +0000   Wed, 08 Oct 2025 22:56:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:57:57 +0000   Wed, 08 Oct 2025 22:56:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:57:57 +0000   Wed, 08 Oct 2025 22:56:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-939665
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 529d733e216047a8a089b00ae851c5b5
	  System UUID:                bdda0eaf-05ab-4058-9e68-44ec4f323643
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-wj8wf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-no-preload-939665                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         110s
	  kube-system                 kindnet-dhln4                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-939665              250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-939665     200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-77lvp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-939665              100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cz2qs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f6ktf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 104s                 kube-proxy       
	  Normal   Starting                 53s                  kube-proxy       
	  Warning  CgroupV1                 119s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node no-preload-939665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node no-preload-939665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s (x8 over 119s)  kubelet          Node no-preload-939665 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  111s                 kubelet          Node no-preload-939665 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 111s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    111s                 kubelet          Node no-preload-939665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s                 kubelet          Node no-preload-939665 status is now: NodeHasSufficientPID
	  Normal   Starting                 111s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           106s                 node-controller  Node no-preload-939665 event: Registered Node no-preload-939665 in Controller
	  Normal   NodeReady                92s                  kubelet          Node no-preload-939665 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-939665 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-939665 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-939665 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                  node-controller  Node no-preload-939665 event: Registered Node no-preload-939665 in Controller
	
	
	==> dmesg <==
	[Oct 8 22:28] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:29] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f8d8050a525b66b1f6059b9bef9774b0a018d7f0b512729419df31644ff85c2d] <==
	{"level":"warn","ts":"2025-10-08T22:57:24.747685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.767213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.783148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.806919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.826575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.843854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.856157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.877532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.890641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.947615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:24.987460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.019194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.057998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.099381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.127379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.158378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.195906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.222963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.278759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.317993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.379981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.432475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.458119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.478375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:57:25.558356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53690","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:58:20 up  1:40,  0 user,  load average: 1.71, 1.51, 1.67
	Linux no-preload-939665 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1099fb7bc0b5a6a715edc1ae2c1822b4f424b055875ea1147123708dbca0e939] <==
	I1008 22:57:27.307242       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:57:27.307882       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 22:57:27.308148       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:57:27.308196       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:57:27.308237       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:57:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:57:27.504081       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:57:27.504155       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:57:27.504188       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:57:27.505083       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 22:57:57.505179       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 22:57:57.505191       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 22:57:57.505299       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 22:57:57.505406       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1008 22:57:59.104769       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:57:59.104825       1 metrics.go:72] Registering metrics
	I1008 22:57:59.104887       1 controller.go:711] "Syncing nftables rules"
	I1008 22:58:07.505737       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:58:07.505828       1 main.go:301] handling current node
	I1008 22:58:17.512584       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:58:17.512615       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fab90393033f57458857473a4b92f90f061b427583bfdde329136620a71abcee] <==
	I1008 22:57:26.462144       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 22:57:26.500545       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 22:57:26.500581       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 22:57:26.500698       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1008 22:57:26.500794       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1008 22:57:26.500842       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 22:57:26.511810       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 22:57:26.511984       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1008 22:57:26.512000       1 policy_source.go:240] refreshing policies
	I1008 22:57:26.512173       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:57:26.520740       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 22:57:26.526948       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 22:57:26.527017       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 22:57:26.558361       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:57:26.764335       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 22:57:27.133150       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:57:27.446595       1 controller.go:667] quota admission added evaluator for: namespaces
	I1008 22:57:27.515076       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 22:57:27.560478       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 22:57:27.578397       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 22:57:27.675445       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.246.240"}
	I1008 22:57:27.726414       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.10.127"}
	I1008 22:57:29.735881       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:57:29.976499       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 22:57:30.179378       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e70ea0acf987029e54c7b861915d0152d9b02ade1e0875e36f54a30ca0b4114e] <==
	I1008 22:57:29.734616       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1008 22:57:29.736754       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 22:57:29.739227       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 22:57:29.741656       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 22:57:29.743892       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1008 22:57:29.745066       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:57:29.764901       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:57:29.769915       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 22:57:29.770011       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 22:57:29.770052       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1008 22:57:29.770098       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1008 22:57:29.770279       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 22:57:29.770333       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 22:57:29.770370       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 22:57:29.770547       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-939665"
	I1008 22:57:29.770599       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1008 22:57:29.771023       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 22:57:29.771242       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1008 22:57:29.771300       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 22:57:29.782562       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 22:57:29.787906       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 22:57:29.802356       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:57:29.802383       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:57:29.802391       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:57:29.808956       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c0d2286c0fb19de49b39e27723286f23f37dd0279a1348cf94a2b65a52a99273] <==
	I1008 22:57:27.367399       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:57:27.567344       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:57:27.668299       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:57:27.668341       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1008 22:57:27.669220       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:57:27.772927       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:57:27.772988       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:57:27.798866       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:57:27.799219       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:57:27.799419       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:57:27.800675       1 config.go:200] "Starting service config controller"
	I1008 22:57:27.800753       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:57:27.800808       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:57:27.800838       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:57:27.800899       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:57:27.800927       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:57:27.801582       1 config.go:309] "Starting node config controller"
	I1008 22:57:27.801794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:57:27.801845       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 22:57:27.901244       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 22:57:27.901287       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 22:57:27.901256       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [22fc15165b261a32940f2dedd3cd49b69d20e5e7e6bd128a867f2fd9e14ac7b3] <==
	I1008 22:57:23.316242       1 serving.go:386] Generated self-signed cert in-memory
	I1008 22:57:26.529545       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 22:57:26.529578       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:57:26.544688       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 22:57:26.544724       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 22:57:26.544763       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:57:26.544770       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 22:57:26.544784       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:57:26.544791       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:57:26.545882       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 22:57:26.546122       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 22:57:26.648011       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 22:57:26.648082       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 22:57:26.648184       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: I1008 22:57:30.567793     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9tq\" (UniqueName: \"kubernetes.io/projected/ed4722e2-72aa-4561-81bb-11312618fca8-kube-api-access-fg9tq\") pod \"kubernetes-dashboard-855c9754f9-f6ktf\" (UID: \"ed4722e2-72aa-4561-81bb-11312618fca8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6ktf"
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: I1008 22:57:30.567820     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed4722e2-72aa-4561-81bb-11312618fca8-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-f6ktf\" (UID: \"ed4722e2-72aa-4561-81bb-11312618fca8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6ktf"
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: I1008 22:57:30.567841     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7f63cecd-fc6f-4f13-a5f1-d2a083f5417a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-cz2qs\" (UID: \"7f63cecd-fc6f-4f13-a5f1-d2a083f5417a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs"
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: W1008 22:57:30.727219     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/crio-b8f42166df0ab7f85a036b1aad8047caa5406bcb98bd62fc0edf3db4d9185542 WatchSource:0}: Error finding container b8f42166df0ab7f85a036b1aad8047caa5406bcb98bd62fc0edf3db4d9185542: Status 404 returned error can't find the container with id b8f42166df0ab7f85a036b1aad8047caa5406bcb98bd62fc0edf3db4d9185542
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: W1008 22:57:30.730933     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28f143a4ef4af5cf31a182d10ed42603658f2ab1bcbda405bc144340b038cea4/crio-421ea0f5d2c6ebb976b5f34e759416a5c06a4b8bb32c93ad4392b7a77fa7a9aa WatchSource:0}: Error finding container 421ea0f5d2c6ebb976b5f34e759416a5c06a4b8bb32c93ad4392b7a77fa7a9aa: Status 404 returned error can't find the container with id 421ea0f5d2c6ebb976b5f34e759416a5c06a4b8bb32c93ad4392b7a77fa7a9aa
	Oct 08 22:57:30 no-preload-939665 kubelet[765]: I1008 22:57:30.914280     765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 08 22:57:35 no-preload-939665 kubelet[765]: I1008 22:57:35.831699     765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6ktf" podStartSLOduration=1.2311345949999999 podStartE2EDuration="5.831679966s" podCreationTimestamp="2025-10-08 22:57:30 +0000 UTC" firstStartedPulling="2025-10-08 22:57:30.731119284 +0000 UTC m=+9.247907278" lastFinishedPulling="2025-10-08 22:57:35.331664664 +0000 UTC m=+13.848452649" observedRunningTime="2025-10-08 22:57:35.831390633 +0000 UTC m=+14.348178651" watchObservedRunningTime="2025-10-08 22:57:35.831679966 +0000 UTC m=+14.348467951"
	Oct 08 22:57:39 no-preload-939665 kubelet[765]: I1008 22:57:39.827839     765 scope.go:117] "RemoveContainer" containerID="70fca093a03ca4d0baa22b2a30aba9f2b2478ea60950940fedce5c9b4f3def00"
	Oct 08 22:57:40 no-preload-939665 kubelet[765]: I1008 22:57:40.832484     765 scope.go:117] "RemoveContainer" containerID="70fca093a03ca4d0baa22b2a30aba9f2b2478ea60950940fedce5c9b4f3def00"
	Oct 08 22:57:40 no-preload-939665 kubelet[765]: I1008 22:57:40.832864     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:40 no-preload-939665 kubelet[765]: E1008 22:57:40.833038     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:57:41 no-preload-939665 kubelet[765]: I1008 22:57:41.837121     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:41 no-preload-939665 kubelet[765]: E1008 22:57:41.842990     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:57:43 no-preload-939665 kubelet[765]: I1008 22:57:43.141364     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:43 no-preload-939665 kubelet[765]: E1008 22:57:43.141573     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: I1008 22:57:57.657625     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: I1008 22:57:57.875131     765 scope.go:117] "RemoveContainer" containerID="c28c75461cf867bdf283e13c269bfe255b9c7fc15ced477eb8b068c032bc4178"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: I1008 22:57:57.885268     765 scope.go:117] "RemoveContainer" containerID="d123de1754be3761bdd8aedd0d3b802c2648897e21273cbc9bf63b763802a0a7"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: I1008 22:57:57.885605     765 scope.go:117] "RemoveContainer" containerID="8a83632a73b7920e80de176c3a5ba53ba3266776a89382be87f4612c3f712fe1"
	Oct 08 22:57:57 no-preload-939665 kubelet[765]: E1008 22:57:57.885909     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:58:03 no-preload-939665 kubelet[765]: I1008 22:58:03.145568     765 scope.go:117] "RemoveContainer" containerID="8a83632a73b7920e80de176c3a5ba53ba3266776a89382be87f4612c3f712fe1"
	Oct 08 22:58:03 no-preload-939665 kubelet[765]: E1008 22:58:03.146559     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cz2qs_kubernetes-dashboard(7f63cecd-fc6f-4f13-a5f1-d2a083f5417a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cz2qs" podUID="7f63cecd-fc6f-4f13-a5f1-d2a083f5417a"
	Oct 08 22:58:14 no-preload-939665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 22:58:15 no-preload-939665 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 22:58:15 no-preload-939665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [156ae21a191583af601f44668a0ae6339b9eb2752a19bb2691e28827eb9f58b2] <==
	2025/10/08 22:57:35 Using namespace: kubernetes-dashboard
	2025/10/08 22:57:35 Using in-cluster config to connect to apiserver
	2025/10/08 22:57:35 Using secret token for csrf signing
	2025/10/08 22:57:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/08 22:57:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/08 22:57:35 Successful initial request to the apiserver, version: v1.34.1
	2025/10/08 22:57:35 Generating JWE encryption key
	2025/10/08 22:57:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/08 22:57:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/08 22:57:35 Initializing JWE encryption key from synchronized object
	2025/10/08 22:57:35 Creating in-cluster Sidecar client
	2025/10/08 22:57:35 Serving insecurely on HTTP port: 9090
	2025/10/08 22:57:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 22:58:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 22:57:35 Starting overwatch
	
	
	==> storage-provisioner [92514c9dbe0b35e5e26afc0c8b051ee4d584b2c2e2b19007c6855bb5c1ca2a20] <==
	I1008 22:57:57.959763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 22:57:57.972319       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 22:57:57.972446       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 22:57:57.976261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:01.431814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:05.691802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:09.289743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:12.343723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:15.366700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:15.374558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:58:15.374703       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 22:58:15.374848       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-939665_39053aaa-5595-41ac-835d-a61b6438acc8!
	I1008 22:58:15.375777       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d7db233-93f3-4724-94fd-ba2ce2cb320c", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-939665_39053aaa-5595-41ac-835d-a61b6438acc8 became leader
	W1008 22:58:15.382031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:15.387792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:58:15.476454       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-939665_39053aaa-5595-41ac-835d-a61b6438acc8!
	W1008 22:58:17.391060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:17.398776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:19.402194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:58:19.410126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c28c75461cf867bdf283e13c269bfe255b9c7fc15ced477eb8b068c032bc4178] <==
	I1008 22:57:27.371437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 22:57:57.379000       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-939665 -n no-preload-939665
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-939665 -n no-preload-939665: exit status 2 (502.172287ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-939665 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.54s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (269.898463ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T22:59:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
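The MK_ADDON_ENABLE_PAUSED exit above comes from a pre-flight "list paused" step that runs `sudo runc list -f json` on the node and fails because runc cannot open /run/runc. A rough Go sketch of that shape of check follows; it mirrors the command shown in the error, not minikube's actual implementation, and the State struct keeps only a couple of the fields runc prints.

// list_paused.go: sketch of a "list paused containers" check in the spirit of
// the failing step above: run `sudo runc list -f json` and decode the result.
// Assumption: this mirrors the command quoted in the error, not minikube code.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// State holds a small subset of the fields runc prints for each container.
type State struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On the node above this step fails with:
		//   open /run/runc: no such file or directory
		fmt.Fprintf(os.Stderr, "runc list failed: %v\n", err)
		os.Exit(1)
	}

	var containers []State
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Fprintf(os.Stderr, "decoding runc output: %v\n", err)
		os.Exit(1)
	}

	for _, c := range containers {
		if c.Status == "paused" {
			fmt.Printf("paused container: %s\n", c.ID)
		}
	}
}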
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-825429 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-825429 describe deploy/metrics-server -n kube-system: exit status 1 (90.592429ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-825429 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
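The assertion at start_stop_delete_test.go:219 expects the deployment description to contain the overridden image reference. A minimal sketch of an equivalent check is below; the expected string and the kubectl arguments are taken from the log, everything else is illustrative rather than the suite's actual code.

// check_addon_image.go: sketch of the image assertion behind the failure at
// start_stop_delete_test.go:219: describe the metrics-server deployment and
// require the expected image string to appear in the output.
// Assumptions: kubectl on PATH; the expected string is copied from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const expected = "fake.domain/registry.k8s.io/echoserver:1.4"

	out, err := exec.Command("kubectl",
		"--context", "embed-certs-825429",
		"describe", "deploy/metrics-server",
		"-n", "kube-system",
	).CombinedOutput()
	if err != nil {
		// In the log this fails first with: deployments.apps "metrics-server" not found
		fmt.Fprintf(os.Stderr, "describe failed: %v\n%s", err, out)
		os.Exit(1)
	}

	if !strings.Contains(string(out), expected) {
		fmt.Fprintf(os.Stderr, "addon did not load correct image, expected %q\n", expected)
		os.Exit(1)
	}
	fmt.Println("metrics-server is using the expected image")
}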
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-825429
helpers_test.go:243: (dbg) docker inspect embed-certs-825429:

-- stdout --
	[
	    {
	        "Id": "3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687",
	        "Created": "2025-10-08T22:58:27.270368583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194238,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:58:27.390716395Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/hostname",
	        "HostsPath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/hosts",
	        "LogPath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687-json.log",
	        "Name": "/embed-certs-825429",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-825429:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-825429",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687",
	                "LowerDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-825429",
	                "Source": "/var/lib/docker/volumes/embed-certs-825429/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-825429",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-825429",
	                "name.minikube.sigs.k8s.io": "embed-certs-825429",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "379e69f3077fdd74c1e4851b13bec8074126f7896dee068a106732ab260e0a54",
	            "SandboxKey": "/var/run/docker/netns/379e69f3077f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-825429": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:06:38:e1:0b:08",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c72f626705cdbf95a7acf2a18c80971f9e1c7948333cf514c2faeca371944562",
	                    "EndpointID": "2ac1c9f3ccb0e82f9e44b4b59bb14e56c44ba6d8d178ef16ad7e57d21b4b2d53",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-825429",
	                        "3489ded6521e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
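The docker inspect output above is what the post-mortem helpers read to recover the node's state and forwarded ports (for example the 22/tcp binding on 127.0.0.1:33071 used for SSH). The sketch below decodes only the handful of fields referenced in this report; the container name is copied from the log, and docker being on PATH is assumed.

// inspect_node.go: sketch that runs `docker inspect <container>` and pulls out
// the state and the host port published for 22/tcp, matching the JSON above.
// Assumption: docker is on PATH and the embed-certs-825429 container exists.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type inspect struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "embed-certs-825429").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker inspect failed: %v\n", err)
		os.Exit(1)
	}

	var containers []inspect // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		fmt.Fprintf(os.Stderr, "decoding inspect output: %v\n", err)
		os.Exit(1)
	}

	c := containers[0]
	fmt.Printf("%s: status=%s paused=%v\n", c.Name, c.State.Status, c.State.Paused)
	for _, b := range c.NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh forwarded to %s:%s\n", b.HostIP, b.HostPort)
	}
}

Run against the container above, this would report status=running, paused=false, and the 127.0.0.1:33071 binding shown in the inspect dump.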
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825429 -n embed-certs-825429
E1008 22:59:59.436267    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-825429 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-825429 logs -n 25: (2.85061421s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-378019                                                                                                                                                                                                                        │ cert-options-378019          │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:53 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │                     │
	│ stop    │ -p old-k8s-version-110407 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-110407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:55 UTC │
	│ image   │ old-k8s-version-110407 image list --format=json                                                                                                                                                                                               │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ pause   │ -p old-k8s-version-110407 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │                     │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	│ stop    │ -p no-preload-939665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:58 UTC │
	│ image   │ no-preload-939665 image list --format=json                                                                                                                                                                                                    │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ pause   │ -p no-preload-939665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	│ ssh     │ force-systemd-flag-385382 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p force-systemd-flag-385382                                                                                                                                                                                                                  │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                                                                                          │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                                                                                          │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-036919                                                                                                                                                                                                               │ disable-driver-mounts-036919 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:58:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:58:25.990357  193942 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:58:25.990578  193942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:58:25.990605  193942 out.go:374] Setting ErrFile to fd 2...
	I1008 22:58:25.990630  193942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:58:25.990927  193942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:58:25.991391  193942 out.go:368] Setting JSON to false
	I1008 22:58:25.992267  193942 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6056,"bootTime":1759958250,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:58:25.992367  193942 start.go:141] virtualization:  
	I1008 22:58:25.995312  193942 out.go:179] * [default-k8s-diff-port-779490] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:58:25.997266  193942 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:58:25.997336  193942 notify.go:220] Checking for updates...
	I1008 22:58:26.002904  193942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:58:26.004374  193942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:58:26.014387  193942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:58:26.017035  193942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:58:26.018639  193942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:58:26.020743  193942 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:58:26.020929  193942 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:58:26.065098  193942 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:58:26.065244  193942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:58:26.146197  193942 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-08 22:58:26.134768696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:58:26.146306  193942 docker.go:318] overlay module found
	I1008 22:58:26.148990  193942 out.go:179] * Using the docker driver based on user configuration
	I1008 22:58:26.150220  193942 start.go:305] selected driver: docker
	I1008 22:58:26.150240  193942 start.go:925] validating driver "docker" against <nil>
	I1008 22:58:26.150254  193942 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:58:26.151017  193942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:58:26.229160  193942 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-08 22:58:26.219012889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:58:26.229547  193942 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 22:58:26.230113  193942 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:58:26.232412  193942 out.go:179] * Using Docker driver with root privileges
	I1008 22:58:26.234018  193942 cni.go:84] Creating CNI manager for ""
	I1008 22:58:26.234095  193942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:58:26.234110  193942 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 22:58:26.234188  193942 start.go:349] cluster config:
	{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:58:26.236351  193942 out.go:179] * Starting "default-k8s-diff-port-779490" primary control-plane node in "default-k8s-diff-port-779490" cluster
	I1008 22:58:26.237799  193942 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:58:26.239088  193942 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:58:26.240366  193942 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:26.240421  193942 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 22:58:26.240434  193942 cache.go:58] Caching tarball of preloaded images
	I1008 22:58:26.240522  193942 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:58:26.240537  193942 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 22:58:26.240637  193942 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 22:58:26.240661  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json: {Name:mkabb98c8b8938b0afd74c24337d3cb6e526a1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:26.240805  193942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:58:26.262793  193942 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:58:26.262821  193942 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:58:26.262847  193942 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:58:26.262870  193942 start.go:360] acquireMachinesLock for default-k8s-diff-port-779490: {Name:mkf9138008d7ef2884518c448a03b33b088d9068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:58:26.262995  193942 start.go:364] duration metric: took 103.862µs to acquireMachinesLock for "default-k8s-diff-port-779490"
	I1008 22:58:26.263025  193942 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:58:26.263090  193942 start.go:125] createHost starting for "" (driver="docker")
	I1008 22:58:22.293446  193267 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:58:22.293805  193267 start.go:159] libmachine.API.Create for "embed-certs-825429" (driver="docker")
	I1008 22:58:22.293865  193267 client.go:168] LocalClient.Create starting
	I1008 22:58:22.293945  193267 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:58:22.293984  193267 main.go:141] libmachine: Decoding PEM data...
	I1008 22:58:22.294006  193267 main.go:141] libmachine: Parsing certificate...
	I1008 22:58:22.294072  193267 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:58:22.294098  193267 main.go:141] libmachine: Decoding PEM data...
	I1008 22:58:22.294113  193267 main.go:141] libmachine: Parsing certificate...
	I1008 22:58:22.294473  193267 cli_runner.go:164] Run: docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:58:22.310790  193267 cli_runner.go:211] docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:58:22.310860  193267 network_create.go:284] running [docker network inspect embed-certs-825429] to gather additional debugging logs...
	I1008 22:58:22.310884  193267 cli_runner.go:164] Run: docker network inspect embed-certs-825429
	W1008 22:58:22.330132  193267 cli_runner.go:211] docker network inspect embed-certs-825429 returned with exit code 1
	I1008 22:58:22.330168  193267 network_create.go:287] error running [docker network inspect embed-certs-825429]: docker network inspect embed-certs-825429: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-825429 not found
	I1008 22:58:22.330190  193267 network_create.go:289] output of [docker network inspect embed-certs-825429]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-825429 not found
	
	** /stderr **
	I1008 22:58:22.330270  193267 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:58:22.354321  193267 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:58:22.354995  193267 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:58:22.355573  193267 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:58:22.356165  193267 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cf220}
	I1008 22:58:22.356187  193267 network_create.go:124] attempt to create docker network embed-certs-825429 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1008 22:58:22.356321  193267 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-825429 embed-certs-825429
	I1008 22:58:22.431314  193267 network_create.go:108] docker network embed-certs-825429 192.168.76.0/24 created
	I1008 22:58:22.431349  193267 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-825429" container
	I1008 22:58:22.431421  193267 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:58:22.454085  193267 cli_runner.go:164] Run: docker volume create embed-certs-825429 --label name.minikube.sigs.k8s.io=embed-certs-825429 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:58:22.487590  193267 oci.go:103] Successfully created a docker volume embed-certs-825429
	I1008 22:58:22.487682  193267 cli_runner.go:164] Run: docker run --rm --name embed-certs-825429-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-825429 --entrypoint /usr/bin/test -v embed-certs-825429:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:58:23.122793  193267 oci.go:107] Successfully prepared a docker volume embed-certs-825429
	I1008 22:58:23.122854  193267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:23.122874  193267 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:58:23.122946  193267 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-825429:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 22:58:26.265952  193942 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:58:26.266184  193942 start.go:159] libmachine.API.Create for "default-k8s-diff-port-779490" (driver="docker")
	I1008 22:58:26.266218  193942 client.go:168] LocalClient.Create starting
	I1008 22:58:26.266282  193942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:58:26.266315  193942 main.go:141] libmachine: Decoding PEM data...
	I1008 22:58:26.266328  193942 main.go:141] libmachine: Parsing certificate...
	I1008 22:58:26.266381  193942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:58:26.266401  193942 main.go:141] libmachine: Decoding PEM data...
	I1008 22:58:26.266410  193942 main.go:141] libmachine: Parsing certificate...
	I1008 22:58:26.266883  193942 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:58:26.283531  193942 cli_runner.go:211] docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:58:26.283612  193942 network_create.go:284] running [docker network inspect default-k8s-diff-port-779490] to gather additional debugging logs...
	I1008 22:58:26.283629  193942 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490
	W1008 22:58:26.300290  193942 cli_runner.go:211] docker network inspect default-k8s-diff-port-779490 returned with exit code 1
	I1008 22:58:26.300318  193942 network_create.go:287] error running [docker network inspect default-k8s-diff-port-779490]: docker network inspect default-k8s-diff-port-779490: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-779490 not found
	I1008 22:58:26.300339  193942 network_create.go:289] output of [docker network inspect default-k8s-diff-port-779490]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-779490 not found
	
	** /stderr **
	I1008 22:58:26.300441  193942 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:58:26.316692  193942 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:58:26.317109  193942 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:58:26.317401  193942 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:58:26.317722  193942 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c72f626705cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:c4:86:26:e3:9b} reservation:<nil>}
	I1008 22:58:26.318168  193942 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b83e0}
	I1008 22:58:26.318194  193942 network_create.go:124] attempt to create docker network default-k8s-diff-port-779490 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1008 22:58:26.318252  193942 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 default-k8s-diff-port-779490
	I1008 22:58:26.394325  193942 network_create.go:108] docker network default-k8s-diff-port-779490 192.168.85.0/24 created
	I1008 22:58:26.394356  193942 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-779490" container
	I1008 22:58:26.394443  193942 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:58:26.410393  193942 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-779490 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:58:26.429093  193942 oci.go:103] Successfully created a docker volume default-k8s-diff-port-779490
	I1008 22:58:26.429189  193942 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-779490-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --entrypoint /usr/bin/test -v default-k8s-diff-port-779490:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:58:27.814315  193942 cli_runner.go:217] Completed: docker run --rm --name default-k8s-diff-port-779490-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --entrypoint /usr/bin/test -v default-k8s-diff-port-779490:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (1.385071285s)
	I1008 22:58:27.814341  193942 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-779490
	I1008 22:58:27.814364  193942 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:27.814383  193942 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:58:27.814448  193942 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-779490:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 22:58:27.118536  193267 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-825429:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (3.995551652s)
	I1008 22:58:27.118570  193267 kic.go:203] duration metric: took 3.995693407s to extract preloaded images to volume ...
	W1008 22:58:27.118742  193267 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:58:27.118882  193267 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:58:27.253113  193267 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-825429 --name embed-certs-825429 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-825429 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-825429 --network embed-certs-825429 --ip 192.168.76.2 --volume embed-certs-825429:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:58:27.673367  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Running}}
	I1008 22:58:27.712441  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:58:27.748089  193267 cli_runner.go:164] Run: docker exec embed-certs-825429 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:58:27.817037  193267 oci.go:144] the created container "embed-certs-825429" has a running status.
	I1008 22:58:27.817077  193267 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa...
	I1008 22:58:29.425227  193267 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:58:29.462916  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:58:29.497401  193267 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:58:29.497421  193267 kic_runner.go:114] Args: [docker exec --privileged embed-certs-825429 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:58:29.563858  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:58:29.593829  193267 machine.go:93] provisionDockerMachine start ...
	I1008 22:58:29.593931  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:29.625609  193267 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:29.626002  193267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1008 22:58:29.626021  193267 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:58:29.627369  193267 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 22:58:32.245853  193942 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-779490:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.431348104s)
	I1008 22:58:32.245882  193942 kic.go:203] duration metric: took 4.431497201s to extract preloaded images to volume ...
	W1008 22:58:32.246018  193942 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:58:32.246126  193942 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:58:32.349126  193942 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-779490 --name default-k8s-diff-port-779490 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --network default-k8s-diff-port-779490 --ip 192.168.85.2 --volume default-k8s-diff-port-779490:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:58:32.727550  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Running}}
	I1008 22:58:32.749832  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:58:32.779701  193942 cli_runner.go:164] Run: docker exec default-k8s-diff-port-779490 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:58:32.840152  193942 oci.go:144] the created container "default-k8s-diff-port-779490" has a running status.
	I1008 22:58:32.840186  193942 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa...
	I1008 22:58:33.783624  193942 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:58:33.812895  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:58:33.834758  193942 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:58:33.834778  193942 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-779490 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:58:33.884178  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:58:33.905394  193942 machine.go:93] provisionDockerMachine start ...
	I1008 22:58:33.905487  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:33.932894  193942 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:33.933220  193942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1008 22:58:33.933230  193942 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:58:33.933970  193942 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50822->127.0.0.1:33076: read: connection reset by peer
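The "handshake failed" lines above (EOF for embed-certs-825429, connection reset for default-k8s-diff-port-779490) are transient: sshd inside the freshly started kic container is still coming up, and the provisioner simply redials the forwarded port until it answers, which is why the same hostname command succeeds a few seconds later in this log. A minimal dial-with-retry sketch of that behaviour, assuming golang.org/x/crypto/ssh; the address, attempt count, and dialWithRetry helper are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps re-dialing the forwarded SSH port until sshd inside the
// kic container accepts the handshake, mirroring why the "handshake failed"
// lines above are followed by a successful hostname command.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, lastErr)
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		Timeout:         5 * time.Second,
		// Auth would carry the generated id_rsa key in a real run.
	}
	if _, err := dialWithRetry("127.0.0.1:33076", cfg, 30); err != nil {
		fmt.Println(err)
	}
}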
	I1008 22:58:32.853473  193267 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 22:58:32.853500  193267 ubuntu.go:182] provisioning hostname "embed-certs-825429"
	I1008 22:58:32.853923  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:32.897941  193267 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:32.898248  193267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1008 22:58:32.898260  193267 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825429 && echo "embed-certs-825429" | sudo tee /etc/hostname
	I1008 22:58:33.126908  193267 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 22:58:33.126979  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:33.185854  193267 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:33.186159  193267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1008 22:58:33.186175  193267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825429' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825429/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825429' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:58:33.383523  193267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:58:33.383546  193267 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:58:33.383566  193267 ubuntu.go:190] setting up certificates
	I1008 22:58:33.383591  193267 provision.go:84] configureAuth start
	I1008 22:58:33.383705  193267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 22:58:33.444214  193267 provision.go:143] copyHostCerts
	I1008 22:58:33.444283  193267 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:58:33.444297  193267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:58:33.444379  193267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:58:33.444482  193267 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:58:33.444493  193267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:58:33.444523  193267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:58:33.444582  193267 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:58:33.444589  193267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:58:33.444614  193267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:58:33.444668  193267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825429 san=[127.0.0.1 192.168.76.2 embed-certs-825429 localhost minikube]
	I1008 22:58:33.683095  193267 provision.go:177] copyRemoteCerts
	I1008 22:58:33.683161  193267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:58:33.683205  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:33.700502  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:33.802650  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:58:33.825364  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 22:58:33.849051  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 22:58:33.888124  193267 provision.go:87] duration metric: took 504.48759ms to configureAuth
	I1008 22:58:33.888153  193267 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:58:33.888322  193267 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:58:33.888423  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:33.909410  193267 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:33.909815  193267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1008 22:58:33.909851  193267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:58:34.253423  193267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:58:34.253443  193267 machine.go:96] duration metric: took 4.659590061s to provisionDockerMachine
	I1008 22:58:34.253454  193267 client.go:171] duration metric: took 11.959578193s to LocalClient.Create
	I1008 22:58:34.253470  193267 start.go:167] duration metric: took 11.959666694s to libmachine.API.Create "embed-certs-825429"
	I1008 22:58:34.253477  193267 start.go:293] postStartSetup for "embed-certs-825429" (driver="docker")
	I1008 22:58:34.253486  193267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:58:34.253550  193267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:58:34.253591  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:34.271288  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:34.375370  193267 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:58:34.379078  193267 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:58:34.379148  193267 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:58:34.379173  193267 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:58:34.379270  193267 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:58:34.379403  193267 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:58:34.379554  193267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:58:34.392792  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:58:34.423568  193267 start.go:296] duration metric: took 170.076155ms for postStartSetup
	I1008 22:58:34.424027  193267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 22:58:34.451266  193267 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/config.json ...
	I1008 22:58:34.451539  193267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:58:34.451579  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:34.475926  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:34.583740  193267 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:58:34.589471  193267 start.go:128] duration metric: took 12.299382125s to createHost
	I1008 22:58:34.589493  193267 start.go:83] releasing machines lock for "embed-certs-825429", held for 12.299512662s
	I1008 22:58:34.589577  193267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 22:58:34.615135  193267 ssh_runner.go:195] Run: cat /version.json
	I1008 22:58:34.615183  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:34.615417  193267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:58:34.615470  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:34.660097  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:34.667777  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:34.769606  193267 ssh_runner.go:195] Run: systemctl --version
	I1008 22:58:34.876441  193267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:58:34.918473  193267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:58:34.923838  193267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:58:34.923956  193267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:58:34.967286  193267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:58:34.967364  193267 start.go:495] detecting cgroup driver to use...
	I1008 22:58:34.967422  193267 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:58:34.967508  193267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:58:34.985916  193267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:58:34.999098  193267 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:58:34.999162  193267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:58:35.020431  193267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:58:35.040581  193267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:58:35.160089  193267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:58:35.293732  193267 docker.go:234] disabling docker service ...
	I1008 22:58:35.293800  193267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:58:35.314372  193267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:58:35.327382  193267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:58:35.455914  193267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:58:35.569247  193267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:58:35.582570  193267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:58:35.597097  193267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:58:35.597229  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.606209  193267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:58:35.606341  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.615508  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.624771  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.633844  193267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:58:35.641842  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.650476  193267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.664476  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.673185  193267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:58:35.681467  193267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:58:35.688784  193267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:58:35.793775  193267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 22:58:35.911022  193267 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:58:35.911105  193267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:58:35.914929  193267 start.go:563] Will wait 60s for crictl version
	I1008 22:58:35.914993  193267 ssh_runner.go:195] Run: which crictl
	I1008 22:58:35.918532  193267 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:58:35.946686  193267 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:58:35.946780  193267 ssh_runner.go:195] Run: crio --version
	I1008 22:58:35.975839  193267 ssh_runner.go:195] Run: crio --version
	I1008 22:58:36.012699  193267 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:58:36.015745  193267 cli_runner.go:164] Run: docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:58:36.033049  193267 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 22:58:36.037086  193267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:58:36.047762  193267 kubeadm.go:883] updating cluster {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:58:36.047897  193267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:36.047957  193267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:58:36.083068  193267 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:58:36.083095  193267 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:58:36.083188  193267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:58:36.109108  193267 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:58:36.109133  193267 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:58:36.109143  193267 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1008 22:58:36.109240  193267 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-825429 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:58:36.109331  193267 ssh_runner.go:195] Run: crio config
	I1008 22:58:36.193225  193267 cni.go:84] Creating CNI manager for ""
	I1008 22:58:36.193258  193267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:58:36.193289  193267 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:58:36.193334  193267 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825429 NodeName:embed-certs-825429 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:58:36.193550  193267 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825429"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
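The block above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube renders and then copies to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch for sanity-checking such a file locally, assuming gopkg.in/yaml.v3 is available; the local file name is hypothetical:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// "kubeadm.yaml" is a hypothetical local copy of the rendered config; on the
	// node minikube writes it to /var/tmp/minikube/kubeadm.yaml.new.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration,
		// and KubeProxyConfiguration, one per YAML document.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}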
	
	I1008 22:58:36.193691  193267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:58:36.203082  193267 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:58:36.203191  193267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:58:36.211063  193267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1008 22:58:36.224482  193267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:58:36.239799  193267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1008 22:58:36.253374  193267 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:58:36.257259  193267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:58:36.268856  193267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:58:36.384703  193267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:58:36.403325  193267 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429 for IP: 192.168.76.2
	I1008 22:58:36.403349  193267 certs.go:195] generating shared ca certs ...
	I1008 22:58:36.403365  193267 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:36.403538  193267 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:58:36.403603  193267 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:58:36.403617  193267 certs.go:257] generating profile certs ...
	I1008 22:58:36.403701  193267 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key
	I1008 22:58:36.403719  193267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.crt with IP's: []
	I1008 22:58:37.061191  193267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.crt ...
	I1008 22:58:37.061224  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.crt: {Name:mkdf8e21f9059b7b8a2cb821778833bc60d65743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:37.061455  193267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key ...
	I1008 22:58:37.061471  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key: {Name:mkfa72764401323eced50bfab5c424645f2285c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:37.061602  193267 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3
	I1008 22:58:37.061623  193267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt.6dc562e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1008 22:58:37.640056  193267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt.6dc562e3 ...
	I1008 22:58:37.640086  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt.6dc562e3: {Name:mk639d77fd638bc7cf2bfdd5b5da4ff52e78a8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:37.640343  193267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3 ...
	I1008 22:58:37.640359  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3: {Name:mkd84f206e09830d5522cca9aeb26202b3227cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:37.640488  193267 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt.6dc562e3 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt
	I1008 22:58:37.640611  193267 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key
	I1008 22:58:37.640707  193267 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key
	I1008 22:58:37.640750  193267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt with IP's: []
	I1008 22:58:38.246731  193267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt ...
	I1008 22:58:38.246804  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt: {Name:mk28574d9b2c45516767271026e48f4821fd4994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:38.247050  193267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key ...
	I1008 22:58:38.247085  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key: {Name:mk1f3d84dcb2ff752724190c701ad4391a99be75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
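The profile certificates generated above carry explicit IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2 for the apiserver cert, per the earlier "Generating cert ... with IP's" line). A simplified sketch of the same idea using Go's crypto/x509; it is self-signed for brevity, whereas minikube signs these with its minikubeCA key, and the DNS names below are illustrative only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // same horizon as CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs match the ones reported in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.76.2"),
		},
		// DNS SANs shown only for illustration.
		DNSNames: []string{"localhost", "minikube", "embed-certs-825429"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}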
	I1008 22:58:38.247345  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:58:38.247411  193267 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:58:38.247439  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:58:38.247503  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:58:38.247565  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:58:38.247649  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:58:38.247727  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:58:38.248354  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:58:38.273018  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:58:38.294248  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:58:38.314440  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:58:38.335711  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 22:58:38.354730  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 22:58:38.376418  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:58:38.398265  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 22:58:38.417612  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:58:38.438812  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:58:38.459790  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:58:38.483561  193267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:58:38.502880  193267 ssh_runner.go:195] Run: openssl version
	I1008 22:58:38.510878  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:58:38.532793  193267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:58:38.541137  193267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:58:38.541199  193267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:58:38.590261  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:58:38.599831  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:58:38.610627  193267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:38.615087  193267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:38.615153  193267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:38.675167  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:58:38.686298  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:58:38.696623  193267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:58:38.701146  193267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:58:38.701220  193267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:58:38.749931  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
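The test/ln -fs pairs above install each CA certificate under OpenSSL's hashed-lookup naming, where the link name is the certificate's subject hash plus ".0" (e.g. 42862.pem becomes 3ec20f2e.0, minikubeCA.pem becomes b5213941.0). A minimal sketch of that pattern, assuming openssl is on PATH; the helper name and paths are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the pattern in the log: ask openssl for the certificate's
// subject hash, then expose the cert to OpenSSL-based clients as <hash>.0
// inside the system cert directory.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like "ln -fs": replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	// Paths are illustrative; in the log, 42862.pem ends up linked as /etc/ssl/certs/3ec20f2e.0.
	if err := linkCACert("/usr/share/ca-certificates/42862.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}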
	I1008 22:58:38.758877  193267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:58:38.763789  193267 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 22:58:38.763837  193267 kubeadm.go:400] StartCluster: {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:58:38.763905  193267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:58:38.763970  193267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:58:38.795549  193267 cri.go:89] found id: ""
	I1008 22:58:38.795625  193267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:58:38.805443  193267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:58:38.813573  193267 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:58:38.813701  193267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:58:38.821386  193267 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:58:38.821410  193267 kubeadm.go:157] found existing configuration files:
	
	I1008 22:58:38.821469  193267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:58:38.829768  193267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:58:38.829845  193267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:58:38.837896  193267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:58:38.847236  193267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:58:38.847295  193267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:58:38.859690  193267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:58:38.870044  193267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:58:38.870105  193267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:58:38.882749  193267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:58:38.893303  193267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:58:38.893365  193267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:58:38.903901  193267 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:58:38.977974  193267 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:58:38.978345  193267 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:58:39.006429  193267 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:58:39.006781  193267 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:58:39.006833  193267 kubeadm.go:318] OS: Linux
	I1008 22:58:39.006887  193267 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:58:39.006941  193267 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:58:39.006995  193267 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:58:39.007049  193267 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:58:39.007115  193267 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:58:39.007170  193267 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:58:39.007221  193267 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:58:39.007276  193267 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:58:39.007329  193267 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:58:39.090608  193267 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:58:39.090723  193267 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:58:39.090818  193267 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:58:39.103102  193267 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:58:37.085738  193942 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 22:58:37.085760  193942 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-779490"
	I1008 22:58:37.085819  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:37.125207  193942 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:37.125516  193942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1008 22:58:37.125529  193942 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-779490 && echo "default-k8s-diff-port-779490" | sudo tee /etc/hostname
	I1008 22:58:37.304747  193942 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 22:58:37.304882  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:37.327885  193942 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:37.328194  193942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1008 22:58:37.328215  193942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-779490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-779490/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-779490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:58:37.478511  193942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:58:37.478587  193942 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:58:37.478622  193942 ubuntu.go:190] setting up certificates
	I1008 22:58:37.478659  193942 provision.go:84] configureAuth start
	I1008 22:58:37.478761  193942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 22:58:37.501938  193942 provision.go:143] copyHostCerts
	I1008 22:58:37.502003  193942 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:58:37.502013  193942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:58:37.502088  193942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:58:37.502183  193942 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:58:37.502190  193942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:58:37.502222  193942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:58:37.502278  193942 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:58:37.502283  193942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:58:37.502307  193942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:58:37.502353  193942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-779490 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-779490 localhost minikube]
	I1008 22:58:37.996969  193942 provision.go:177] copyRemoteCerts
	I1008 22:58:37.997082  193942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:58:37.997150  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.025747  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:38.138270  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:58:38.158442  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 22:58:38.178797  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 22:58:38.198611  193942 provision.go:87] duration metric: took 719.911754ms to configureAuth
	I1008 22:58:38.198638  193942 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:58:38.198819  193942 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:58:38.198927  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.218397  193942 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:38.218703  193942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1008 22:58:38.218724  193942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:58:38.515693  193942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:58:38.515755  193942 machine.go:96] duration metric: took 4.610341316s to provisionDockerMachine
	I1008 22:58:38.515789  193942 client.go:171] duration metric: took 12.249555066s to LocalClient.Create
	I1008 22:58:38.515836  193942 start.go:167] duration metric: took 12.249652995s to libmachine.API.Create "default-k8s-diff-port-779490"
	I1008 22:58:38.515866  193942 start.go:293] postStartSetup for "default-k8s-diff-port-779490" (driver="docker")
	I1008 22:58:38.515893  193942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:58:38.515993  193942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:58:38.516063  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.540990  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:38.647743  193942 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:58:38.651610  193942 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:58:38.651635  193942 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:58:38.651645  193942 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:58:38.651697  193942 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:58:38.651779  193942 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:58:38.651880  193942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:58:38.661802  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:58:38.686000  193942 start.go:296] duration metric: took 170.105406ms for postStartSetup
	I1008 22:58:38.686470  193942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 22:58:38.707844  193942 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 22:58:38.708114  193942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:58:38.708173  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.731226  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:38.831933  193942 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:58:38.837460  193942 start.go:128] duration metric: took 12.574356156s to createHost
	I1008 22:58:38.837487  193942 start.go:83] releasing machines lock for "default-k8s-diff-port-779490", held for 12.574479727s
	I1008 22:58:38.837558  193942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 22:58:38.863802  193942 ssh_runner.go:195] Run: cat /version.json
	I1008 22:58:38.863850  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.864085  193942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:58:38.864137  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.900231  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:38.913384  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:39.014810  193942 ssh_runner.go:195] Run: systemctl --version
	I1008 22:58:39.112380  193942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:58:39.168303  193942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:58:39.174524  193942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:58:39.174695  193942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:58:39.206521  193942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:58:39.206545  193942 start.go:495] detecting cgroup driver to use...
	I1008 22:58:39.206615  193942 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:58:39.206700  193942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:58:39.226905  193942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:58:39.241766  193942 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:58:39.241915  193942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:58:39.260899  193942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:58:39.283060  193942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:58:39.495447  193942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:58:39.673775  193942 docker.go:234] disabling docker service ...
	I1008 22:58:39.673852  193942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:58:39.698825  193942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:58:39.713795  193942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:58:39.856905  193942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:58:40.007009  193942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:58:40.024999  193942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:58:40.044193  193942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:58:40.044271  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.054527  193942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:58:40.054609  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.064733  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.075759  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.086551  193942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:58:40.098332  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.112746  193942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.129109  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.139485  193942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:58:40.148411  193942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:58:40.157545  193942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:58:40.300738  193942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 22:58:40.485242  193942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:58:40.485411  193942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:58:40.490402  193942 start.go:563] Will wait 60s for crictl version
	I1008 22:58:40.490522  193942 ssh_runner.go:195] Run: which crictl
	I1008 22:58:40.494619  193942 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:58:40.520950  193942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:58:40.521129  193942 ssh_runner.go:195] Run: crio --version
	I1008 22:58:40.552869  193942 ssh_runner.go:195] Run: crio --version
	I1008 22:58:40.599308  193942 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:58:40.602116  193942 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:58:40.622876  193942 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:58:40.628292  193942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:58:40.641026  193942 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:58:40.641152  193942 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:40.641225  193942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:58:40.700347  193942 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:58:40.700374  193942 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:58:40.700452  193942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:58:40.734215  193942 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:58:40.734254  193942 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:58:40.734263  193942 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1008 22:58:40.734360  193942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-779490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:58:40.734491  193942 ssh_runner.go:195] Run: crio config
	I1008 22:58:40.817431  193942 cni.go:84] Creating CNI manager for ""
	I1008 22:58:40.817465  193942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:58:40.817486  193942 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:58:40.817508  193942 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-779490 NodeName:default-k8s-diff-port-779490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:58:40.817678  193942 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-779490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:58:40.817764  193942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:58:40.826337  193942 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:58:40.826414  193942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:58:40.835791  193942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1008 22:58:40.852793  193942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:58:40.866921  193942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1008 22:58:40.881794  193942 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:58:40.885818  193942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:58:40.895637  193942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:58:39.109546  193267 out.go:252]   - Generating certificates and keys ...
	I1008 22:58:39.109669  193267 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:58:39.109738  193267 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:58:39.281483  193267 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 22:58:39.646542  193267 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 22:58:40.040744  193267 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 22:58:40.635786  193267 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 22:58:41.761977  193267 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 22:58:41.762120  193267 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-825429 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 22:58:41.052887  193942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:58:41.069143  193942 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490 for IP: 192.168.85.2
	I1008 22:58:41.069168  193942 certs.go:195] generating shared ca certs ...
	I1008 22:58:41.069184  193942 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.069349  193942 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:58:41.069407  193942 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:58:41.069420  193942 certs.go:257] generating profile certs ...
	I1008 22:58:41.069492  193942 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key
	I1008 22:58:41.069524  193942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt with IP's: []
	I1008 22:58:41.610318  193942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt ...
	I1008 22:58:41.610352  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: {Name:mk6078106987510267b1b0e1a9d7470df5ff04d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.610543  193942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key ...
	I1008 22:58:41.610559  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key: {Name:mk0f7a7c762f34bcab92d826adf9dc16432a6764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.610650  193942 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765
	I1008 22:58:41.610668  193942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt.e9b65765 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1008 22:58:41.820500  193942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt.e9b65765 ...
	I1008 22:58:41.820532  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt.e9b65765: {Name:mk3902d08d3c330d1c0272056ea7bfdcd8d45f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.820736  193942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765 ...
	I1008 22:58:41.820751  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765: {Name:mk69a7b7e64ea74a698b74781a61b3846d80a8e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.820843  193942 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt.e9b65765 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt
	I1008 22:58:41.820927  193942 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key
	I1008 22:58:41.820990  193942 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key
	I1008 22:58:41.821008  193942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt with IP's: []
	I1008 22:58:42.407452  193942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt ...
	I1008 22:58:42.407535  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt: {Name:mka0acd8e40bb16d49f151b2b541fd8cbfc63c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:42.407820  193942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key ...
	I1008 22:58:42.407864  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key: {Name:mk8ccb71f5cb93a8a35fd14a12573f4c958bdc51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:42.408191  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:58:42.408283  193942 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:58:42.408342  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:58:42.408398  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:58:42.408482  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:58:42.408540  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:58:42.408636  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:58:42.409358  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:58:42.431160  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:58:42.451959  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:58:42.472746  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:58:42.494274  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 22:58:42.514679  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:58:42.534492  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:58:42.554936  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 22:58:42.576065  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:58:42.611063  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:58:42.682727  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:58:42.706413  193942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:58:42.721569  193942 ssh_runner.go:195] Run: openssl version
	I1008 22:58:42.728768  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:58:42.738781  193942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:42.743540  193942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:42.743611  193942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:42.785536  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:58:42.795377  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:58:42.804765  193942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:58:42.809669  193942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:58:42.809746  193942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:58:42.852259  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:58:42.862400  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:58:42.872402  193942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:58:42.877957  193942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:58:42.878032  193942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:58:42.925394  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:58:42.935052  193942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:58:42.940324  193942 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 22:58:42.940393  193942 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:58:42.940481  193942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:58:42.940556  193942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:58:42.976059  193942 cri.go:89] found id: ""
	I1008 22:58:42.976139  193942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:58:42.986937  193942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:58:42.995598  193942 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:58:42.995670  193942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:58:43.008167  193942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:58:43.008189  193942 kubeadm.go:157] found existing configuration files:
	
	I1008 22:58:43.008254  193942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 22:58:43.018575  193942 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:58:43.018673  193942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:58:43.027633  193942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 22:58:43.037598  193942 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:58:43.037691  193942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:58:43.046368  193942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 22:58:43.056394  193942 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:58:43.056465  193942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:58:43.065105  193942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 22:58:43.076000  193942 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:58:43.076077  193942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:58:43.084901  193942 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:58:43.150179  193942 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:58:43.150600  193942 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:58:43.181801  193942 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:58:43.181897  193942 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:58:43.181951  193942 kubeadm.go:318] OS: Linux
	I1008 22:58:43.182016  193942 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:58:43.182088  193942 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:58:43.182155  193942 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:58:43.182210  193942 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:58:43.182280  193942 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:58:43.182346  193942 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:58:43.182409  193942 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:58:43.182473  193942 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:58:43.182539  193942 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:58:43.262230  193942 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:58:43.262357  193942 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:58:43.262473  193942 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:58:43.274179  193942 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:58:41.998064  193267 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 22:58:41.998223  193267 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-825429 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 22:58:42.582024  193267 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 22:58:42.856851  193267 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 22:58:43.453984  193267 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 22:58:43.454061  193267 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:58:44.013001  193267 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:58:44.315058  193267 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:58:44.434679  193267 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:58:44.866116  193267 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:58:45.173418  193267 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:58:45.174707  193267 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:58:45.178193  193267 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:58:43.279957  193942 out.go:252]   - Generating certificates and keys ...
	I1008 22:58:43.280071  193942 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:58:43.280149  193942 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:58:44.738056  193942 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 22:58:45.105959  193942 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 22:58:45.420866  193942 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 22:58:45.568191  193942 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 22:58:45.851033  193942 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 22:58:45.851636  193942 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-779490 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:58:45.181934  193267 out.go:252]   - Booting up control plane ...
	I1008 22:58:45.182055  193267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:58:45.183988  193267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:58:45.186213  193267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:58:45.210268  193267 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:58:45.210388  193267 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:58:45.221375  193267 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:58:45.221487  193267 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:58:45.221534  193267 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:58:45.420398  193267 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:58:45.420527  193267 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:58:46.921719  193267 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501644492s
	I1008 22:58:46.925300  193267 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:58:46.925401  193267 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1008 22:58:46.925647  193267 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:58:46.925739  193267 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:58:46.220524  193942 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 22:58:46.221139  193942 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-779490 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:58:46.459977  193942 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 22:58:47.214226  193942 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 22:58:48.096575  193942 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 22:58:48.096658  193942 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:58:48.517972  193942 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:58:49.076031  193942 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:58:50.064592  193942 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:58:50.098686  193942 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:58:50.921550  193942 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:58:50.922246  193942 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:58:50.924991  193942 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:58:50.928520  193942 out.go:252]   - Booting up control plane ...
	I1008 22:58:50.928633  193942 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:58:50.928718  193942 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:58:50.930110  193942 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:58:50.955071  193942 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:58:50.955182  193942 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:58:50.962921  193942 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:58:50.963256  193942 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:58:50.963304  193942 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:58:51.176857  193942 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:58:51.181205  193942 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:58:52.188911  193942 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.003751893s
	I1008 22:58:52.189023  193942 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:58:52.189107  193942 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1008 22:58:52.189199  193942 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:58:52.189281  193942 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:58:52.415761  193267 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.48961388s
	I1008 22:58:54.595484  193267 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.670172572s
	I1008 22:58:56.427676  193267 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.502045243s
	I1008 22:58:56.460024  193267 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 22:58:56.485124  193267 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 22:58:56.504848  193267 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 22:58:56.505328  193267 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-825429 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 22:58:56.549005  193267 kubeadm.go:318] [bootstrap-token] Using token: 7u8re5.dkmizverazog8if9
	I1008 22:58:56.552047  193267 out.go:252]   - Configuring RBAC rules ...
	I1008 22:58:56.552175  193267 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 22:58:56.596777  193267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 22:58:56.610180  193267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 22:58:56.623090  193267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 22:58:56.633877  193267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 22:58:56.640676  193267 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 22:58:56.840695  193267 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 22:58:57.499787  193267 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 22:58:57.841419  193267 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 22:58:57.842769  193267 kubeadm.go:318] 
	I1008 22:58:57.842848  193267 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 22:58:57.842853  193267 kubeadm.go:318] 
	I1008 22:58:57.842934  193267 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 22:58:57.842939  193267 kubeadm.go:318] 
	I1008 22:58:57.842965  193267 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 22:58:57.843027  193267 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 22:58:57.843081  193267 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 22:58:57.843085  193267 kubeadm.go:318] 
	I1008 22:58:57.843142  193267 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 22:58:57.843146  193267 kubeadm.go:318] 
	I1008 22:58:57.843196  193267 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 22:58:57.843207  193267 kubeadm.go:318] 
	I1008 22:58:57.843262  193267 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 22:58:57.843340  193267 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 22:58:57.843418  193267 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 22:58:57.843423  193267 kubeadm.go:318] 
	I1008 22:58:57.843511  193267 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 22:58:57.843591  193267 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 22:58:57.843595  193267 kubeadm.go:318] 
	I1008 22:58:57.843683  193267 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7u8re5.dkmizverazog8if9 \
	I1008 22:58:57.843803  193267 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 22:58:57.843835  193267 kubeadm.go:318] 	--control-plane 
	I1008 22:58:57.843841  193267 kubeadm.go:318] 
	I1008 22:58:57.844246  193267 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 22:58:57.844258  193267 kubeadm.go:318] 
	I1008 22:58:57.844344  193267 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7u8re5.dkmizverazog8if9 \
	I1008 22:58:57.844454  193267 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 22:58:57.854425  193267 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:58:57.854766  193267 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:58:57.854928  193267 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:58:57.854971  193267 cni.go:84] Creating CNI manager for ""
	I1008 22:58:57.854994  193267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:58:57.860300  193267 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 22:58:59.711770  193942 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.523059433s
	I1008 22:59:00.214966  193942 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.02657341s
	I1008 22:58:57.863226  193267 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 22:58:57.868227  193267 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 22:58:57.868257  193267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 22:58:57.923152  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 22:58:58.468793  193267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 22:58:58.468920  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:58:58.469003  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-825429 minikube.k8s.io/updated_at=2025_10_08T22_58_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=embed-certs-825429 minikube.k8s.io/primary=true
	I1008 22:58:58.986172  193267 ops.go:34] apiserver oom_adj: -16
	I1008 22:58:58.986274  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:58:59.486838  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:58:59.987175  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:00.486918  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:00.987120  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:01.487194  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:01.986546  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:02.109491  193267 kubeadm.go:1113] duration metric: took 3.640615382s to wait for elevateKubeSystemPrivileges
	I1008 22:59:02.109520  193267 kubeadm.go:402] duration metric: took 23.3456883s to StartCluster
	I1008 22:59:02.109539  193267 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:59:02.109603  193267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:59:02.110717  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:59:02.110960  193267 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:59:02.111074  193267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 22:59:02.111305  193267 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:59:02.111342  193267 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:59:02.111406  193267 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825429"
	I1008 22:59:02.111420  193267 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-825429"
	I1008 22:59:02.111441  193267 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 22:59:02.111993  193267 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825429"
	I1008 22:59:02.112018  193267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825429"
	I1008 22:59:02.112342  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:59:02.112559  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:59:02.114298  193267 out.go:179] * Verifying Kubernetes components...
	I1008 22:59:02.117453  193267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:59:02.159648  193267 addons.go:238] Setting addon default-storageclass=true in "embed-certs-825429"
	I1008 22:59:02.159690  193267 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 22:59:02.160127  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:59:02.175987  193267 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:59:02.190192  193942 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.001664249s
	I1008 22:59:02.246484  193942 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 22:59:02.266160  193942 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 22:59:02.283295  193942 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 22:59:02.283854  193942 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-779490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 22:59:02.300647  193942 kubeadm.go:318] [bootstrap-token] Using token: gg0985.x9u9zh7hb4308wrl
	I1008 22:59:02.303761  193942 out.go:252]   - Configuring RBAC rules ...
	I1008 22:59:02.303892  193942 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 22:59:02.311276  193942 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 22:59:02.326189  193942 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 22:59:02.331151  193942 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 22:59:02.335589  193942 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 22:59:02.340215  193942 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 22:59:02.599157  193942 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 22:59:03.117499  193942 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 22:59:03.604170  193942 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 22:59:03.605203  193942 kubeadm.go:318] 
	I1008 22:59:03.605274  193942 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 22:59:03.605280  193942 kubeadm.go:318] 
	I1008 22:59:03.605367  193942 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 22:59:03.605373  193942 kubeadm.go:318] 
	I1008 22:59:03.605399  193942 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 22:59:03.605460  193942 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 22:59:03.605514  193942 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 22:59:03.605519  193942 kubeadm.go:318] 
	I1008 22:59:03.605575  193942 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 22:59:03.605579  193942 kubeadm.go:318] 
	I1008 22:59:03.605645  193942 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 22:59:03.605652  193942 kubeadm.go:318] 
	I1008 22:59:03.605707  193942 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 22:59:03.605784  193942 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 22:59:03.605864  193942 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 22:59:03.605869  193942 kubeadm.go:318] 
	I1008 22:59:03.605957  193942 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 22:59:03.606036  193942 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 22:59:03.606041  193942 kubeadm.go:318] 
	I1008 22:59:03.606132  193942 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token gg0985.x9u9zh7hb4308wrl \
	I1008 22:59:03.606240  193942 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 22:59:03.606261  193942 kubeadm.go:318] 	--control-plane 
	I1008 22:59:03.606266  193942 kubeadm.go:318] 
	I1008 22:59:03.606354  193942 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 22:59:03.606358  193942 kubeadm.go:318] 
	I1008 22:59:03.606711  193942 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token gg0985.x9u9zh7hb4308wrl \
	I1008 22:59:03.606824  193942 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 22:59:03.620714  193942 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:59:03.620977  193942 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:59:03.621094  193942 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:59:03.621178  193942 cni.go:84] Creating CNI manager for ""
	I1008 22:59:03.621189  193942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:59:03.624414  193942 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 22:59:02.181880  193267 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:59:02.181909  193267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:59:02.181979  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:59:02.200394  193267 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:59:02.200416  193267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:59:02.200484  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:59:02.237298  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:59:02.248302  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:59:02.494274  193267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 22:59:02.532363  193267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:59:02.573289  193267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:59:02.639027  193267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:59:03.593401  193267 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.099087689s)
	I1008 22:59:03.593438  193267 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1008 22:59:03.593769  193267 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.061370571s)
	I1008 22:59:03.594652  193267 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825429" to be "Ready" ...
	I1008 22:59:04.067779  193267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.494449541s)
	I1008 22:59:04.067853  193267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.42878533s)
	I1008 22:59:04.106440  193267 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 22:59:03.627845  193942 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 22:59:03.632743  193942 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 22:59:03.632762  193942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 22:59:03.652673  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 22:59:04.158896  193942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 22:59:04.159031  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:04.159103  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-779490 minikube.k8s.io/updated_at=2025_10_08T22_59_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=default-k8s-diff-port-779490 minikube.k8s.io/primary=true
	I1008 22:59:04.363987  193942 ops.go:34] apiserver oom_adj: -16
	I1008 22:59:04.364093  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:04.864801  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:05.364615  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:05.864352  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:04.109468  193267 addons.go:514] duration metric: took 1.998089049s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 22:59:04.111508  193267 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-825429" context rescaled to 1 replicas
	W1008 22:59:05.597374  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	I1008 22:59:06.364695  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:06.864463  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:07.364932  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:07.864697  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:08.000229  193942 kubeadm.go:1113] duration metric: took 3.841240748s to wait for elevateKubeSystemPrivileges
	I1008 22:59:08.000266  193942 kubeadm.go:402] duration metric: took 25.059877981s to StartCluster
	I1008 22:59:08.000284  193942 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:59:08.000348  193942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:59:08.002240  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:59:08.002577  193942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 22:59:08.003101  193942 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:59:08.003187  193942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:59:08.003220  193942 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:59:08.003647  193942 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-779490"
	I1008 22:59:08.003668  193942 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-779490"
	I1008 22:59:08.003694  193942 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 22:59:08.004032  193942 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-779490"
	I1008 22:59:08.004099  193942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-779490"
	I1008 22:59:08.004222  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:59:08.004505  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:59:08.007530  193942 out.go:179] * Verifying Kubernetes components...
	I1008 22:59:08.012642  193942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:59:08.049795  193942 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-779490"
	I1008 22:59:08.049839  193942 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 22:59:08.050011  193942 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:59:08.050276  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:59:08.053726  193942 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:59:08.053752  193942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:59:08.053819  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:59:08.082215  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:59:08.088803  193942 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:59:08.088824  193942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:59:08.088892  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:59:08.125403  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:59:08.271936  193942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:59:08.338609  193942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 22:59:08.339590  193942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:59:08.364911  193942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:59:09.182616  193942 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1008 22:59:09.184995  193942 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 22:59:09.238888  193942 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 22:59:09.242039  193942 addons.go:514] duration metric: took 1.238799643s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 22:59:09.686761  193942 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-779490" context rescaled to 1 replicas
	W1008 22:59:07.598451  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:10.097872  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:11.187947  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:13.188676  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:15.188799  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:12.098534  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:14.598137  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:17.688134  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:19.688529  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:17.097964  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:19.098348  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:21.597786  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:21.689711  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:24.188693  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:24.098301  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:26.598323  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:26.688435  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:28.688525  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:30.688650  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:29.098120  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:31.098760  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:33.188474  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:35.688011  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:33.598283  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:36.097755  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:37.688535  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:40.188547  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:38.597980  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:41.097407  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:42.189118  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:44.688444  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:43.098653  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	I1008 22:59:45.597837  193267 node_ready.go:49] node "embed-certs-825429" is "Ready"
	I1008 22:59:45.597867  193267 node_ready.go:38] duration metric: took 42.003157205s for node "embed-certs-825429" to be "Ready" ...
	I1008 22:59:45.597881  193267 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:59:45.597975  193267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:59:45.610225  193267 api_server.go:72] duration metric: took 43.49922909s to wait for apiserver process to appear ...
	I1008 22:59:45.610251  193267 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:59:45.610270  193267 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1008 22:59:45.619170  193267 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1008 22:59:45.620217  193267 api_server.go:141] control plane version: v1.34.1
	I1008 22:59:45.620240  193267 api_server.go:131] duration metric: took 9.981516ms to wait for apiserver health ...
	I1008 22:59:45.620250  193267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:59:45.623500  193267 system_pods.go:59] 8 kube-system pods found
	I1008 22:59:45.623540  193267 system_pods.go:61] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:45.623547  193267 system_pods.go:61] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 22:59:45.623553  193267 system_pods.go:61] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 22:59:45.623559  193267 system_pods.go:61] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running
	I1008 22:59:45.623564  193267 system_pods.go:61] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running
	I1008 22:59:45.623568  193267 system_pods.go:61] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 22:59:45.623573  193267 system_pods.go:61] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running
	I1008 22:59:45.623583  193267 system_pods.go:61] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:45.623589  193267 system_pods.go:74] duration metric: took 3.333339ms to wait for pod list to return data ...
	I1008 22:59:45.623602  193267 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:59:45.626757  193267 default_sa.go:45] found service account: "default"
	I1008 22:59:45.626791  193267 default_sa.go:55] duration metric: took 3.1741ms for default service account to be created ...
	I1008 22:59:45.626802  193267 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:59:45.630100  193267 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:45.630134  193267 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:45.630141  193267 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 22:59:45.630148  193267 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 22:59:45.630152  193267 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running
	I1008 22:59:45.630157  193267 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running
	I1008 22:59:45.630163  193267 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 22:59:45.630174  193267 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running
	I1008 22:59:45.630180  193267 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:45.630213  193267 retry.go:31] will retry after 253.798696ms: missing components: kube-dns
	I1008 22:59:45.916124  193267 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:45.916161  193267 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:45.916169  193267 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 22:59:45.916175  193267 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 22:59:45.916179  193267 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running
	I1008 22:59:45.916184  193267 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running
	I1008 22:59:45.916188  193267 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 22:59:45.916193  193267 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running
	I1008 22:59:45.916202  193267 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:45.916218  193267 retry.go:31] will retry after 290.004825ms: missing components: kube-dns
	I1008 22:59:46.211080  193267 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:46.211119  193267 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:46.211127  193267 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 22:59:46.211133  193267 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 22:59:46.211138  193267 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running
	I1008 22:59:46.211143  193267 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running
	I1008 22:59:46.211147  193267 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 22:59:46.211151  193267 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running
	I1008 22:59:46.211165  193267 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 22:59:46.211177  193267 system_pods.go:126] duration metric: took 584.369269ms to wait for k8s-apps to be running ...
	I1008 22:59:46.211190  193267 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:59:46.211285  193267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:59:46.227658  193267 system_svc.go:56] duration metric: took 16.451548ms WaitForService to wait for kubelet
	I1008 22:59:46.227688  193267 kubeadm.go:586] duration metric: took 44.116696946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:59:46.227714  193267 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:59:46.231337  193267 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:59:46.231380  193267 node_conditions.go:123] node cpu capacity is 2
	I1008 22:59:46.231393  193267 node_conditions.go:105] duration metric: took 3.668227ms to run NodePressure ...
	I1008 22:59:46.231406  193267 start.go:241] waiting for startup goroutines ...
	I1008 22:59:46.231413  193267 start.go:246] waiting for cluster config update ...
	I1008 22:59:46.231424  193267 start.go:255] writing updated cluster config ...
	I1008 22:59:46.231716  193267 ssh_runner.go:195] Run: rm -f paused
	I1008 22:59:46.235559  193267 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:59:46.239521  193267 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.251212  193267 pod_ready.go:94] pod "coredns-66bc5c9577-s7kcb" is "Ready"
	I1008 22:59:47.251241  193267 pod_ready.go:86] duration metric: took 1.011692139s for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.254064  193267 pod_ready.go:83] waiting for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.259151  193267 pod_ready.go:94] pod "etcd-embed-certs-825429" is "Ready"
	I1008 22:59:47.259198  193267 pod_ready.go:86] duration metric: took 5.107579ms for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.261988  193267 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.267240  193267 pod_ready.go:94] pod "kube-apiserver-embed-certs-825429" is "Ready"
	I1008 22:59:47.267269  193267 pod_ready.go:86] duration metric: took 5.253944ms for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.269959  193267 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.443116  193267 pod_ready.go:94] pod "kube-controller-manager-embed-certs-825429" is "Ready"
	I1008 22:59:47.443144  193267 pod_ready.go:86] duration metric: took 173.158605ms for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.643484  193267 pod_ready.go:83] waiting for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:48.043576  193267 pod_ready.go:94] pod "kube-proxy-86wtc" is "Ready"
	I1008 22:59:48.043653  193267 pod_ready.go:86] duration metric: took 400.142079ms for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:48.242901  193267 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:48.643357  193267 pod_ready.go:94] pod "kube-scheduler-embed-certs-825429" is "Ready"
	I1008 22:59:48.643392  193267 pod_ready.go:86] duration metric: took 400.45574ms for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:48.643405  193267 pod_ready.go:40] duration metric: took 2.407814607s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:59:48.705771  193267 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:59:48.709138  193267 out.go:179] * Done! kubectl is now configured to use "embed-certs-825429" cluster and "default" namespace by default
	W1008 22:59:46.694731  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:49.188688  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	I1008 22:59:50.188551  193942 node_ready.go:49] node "default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:50.188583  193942 node_ready.go:38] duration metric: took 41.00355039s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 22:59:50.188597  193942 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:59:50.188655  193942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:59:50.201942  193942 api_server.go:72] duration metric: took 42.198420099s to wait for apiserver process to appear ...
	I1008 22:59:50.201964  193942 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:59:50.201984  193942 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 22:59:50.211411  193942 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1008 22:59:50.213114  193942 api_server.go:141] control plane version: v1.34.1
	I1008 22:59:50.213137  193942 api_server.go:131] duration metric: took 11.166629ms to wait for apiserver health ...
	I1008 22:59:50.213146  193942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:59:50.224131  193942 system_pods.go:59] 8 kube-system pods found
	I1008 22:59:50.224162  193942 system_pods.go:61] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:50.224169  193942 system_pods.go:61] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 22:59:50.224175  193942 system_pods.go:61] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 22:59:50.224180  193942 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running
	I1008 22:59:50.224184  193942 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running
	I1008 22:59:50.224189  193942 system_pods.go:61] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 22:59:50.224193  193942 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running
	I1008 22:59:50.224199  193942 system_pods.go:61] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:50.224205  193942 system_pods.go:74] duration metric: took 11.053348ms to wait for pod list to return data ...
	I1008 22:59:50.224212  193942 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:59:50.239050  193942 default_sa.go:45] found service account: "default"
	I1008 22:59:50.239073  193942 default_sa.go:55] duration metric: took 14.855271ms for default service account to be created ...
	I1008 22:59:50.239083  193942 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:59:50.244512  193942 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:50.244542  193942 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:50.244548  193942 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 22:59:50.244555  193942 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 22:59:50.244559  193942 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running
	I1008 22:59:50.244563  193942 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running
	I1008 22:59:50.244569  193942 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 22:59:50.244573  193942 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running
	I1008 22:59:50.244579  193942 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:50.244599  193942 retry.go:31] will retry after 210.842736ms: missing components: kube-dns
	I1008 22:59:50.460342  193942 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:50.460372  193942 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:50.460379  193942 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 22:59:50.460386  193942 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 22:59:50.460391  193942 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running
	I1008 22:59:50.460396  193942 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running
	I1008 22:59:50.460400  193942 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 22:59:50.460404  193942 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running
	I1008 22:59:50.460409  193942 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:50.460436  193942 retry.go:31] will retry after 288.668809ms: missing components: kube-dns
	I1008 22:59:50.753537  193942 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:50.753569  193942 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running
	I1008 22:59:50.753577  193942 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 22:59:50.753587  193942 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 22:59:50.753592  193942 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running
	I1008 22:59:50.753597  193942 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running
	I1008 22:59:50.753601  193942 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 22:59:50.753605  193942 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running
	I1008 22:59:50.753609  193942 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 22:59:50.753617  193942 system_pods.go:126] duration metric: took 514.5286ms to wait for k8s-apps to be running ...
	I1008 22:59:50.753625  193942 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:59:50.753720  193942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:59:50.766864  193942 system_svc.go:56] duration metric: took 13.229128ms WaitForService to wait for kubelet
	I1008 22:59:50.766894  193942 kubeadm.go:586] duration metric: took 42.763378089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:59:50.766913  193942 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:59:50.770047  193942 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:59:50.770076  193942 node_conditions.go:123] node cpu capacity is 2
	I1008 22:59:50.770088  193942 node_conditions.go:105] duration metric: took 3.169136ms to run NodePressure ...
	I1008 22:59:50.770101  193942 start.go:241] waiting for startup goroutines ...
	I1008 22:59:50.770109  193942 start.go:246] waiting for cluster config update ...
	I1008 22:59:50.770124  193942 start.go:255] writing updated cluster config ...
	I1008 22:59:50.770409  193942 ssh_runner.go:195] Run: rm -f paused
	I1008 22:59:50.774185  193942 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:59:50.778186  193942 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.783430  193942 pod_ready.go:94] pod "coredns-66bc5c9577-9xx2v" is "Ready"
	I1008 22:59:50.783509  193942 pod_ready.go:86] duration metric: took 5.295554ms for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.785842  193942 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.790504  193942 pod_ready.go:94] pod "etcd-default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:50.790529  193942 pod_ready.go:86] duration metric: took 4.664391ms for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.793395  193942 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.798845  193942 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:50.798875  193942 pod_ready.go:86] duration metric: took 5.40753ms for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.801561  193942 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:51.178891  193942 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:51.178965  193942 pod_ready.go:86] duration metric: took 377.377505ms for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:51.381307  193942 pod_ready.go:83] waiting for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:51.778918  193942 pod_ready.go:94] pod "kube-proxy-jrvxc" is "Ready"
	I1008 22:59:51.778946  193942 pod_ready.go:86] duration metric: took 397.611153ms for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:51.979208  193942 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:52.378345  193942 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:52.378373  193942 pod_ready.go:86] duration metric: took 399.13808ms for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:52.378386  193942 pod_ready.go:40] duration metric: took 1.604168122s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:59:52.434097  193942 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:59:52.437557  193942 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-779490" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 22:59:45 embed-certs-825429 crio[834]: time="2025-10-08T22:59:45.86879389Z" level=info msg="Created container a15a644adc20f87f8bbd15df407614d8078ba84305ab6c6b6f55a1ac0655a31a: kube-system/coredns-66bc5c9577-s7kcb/coredns" id=6934f77e-975b-4503-bf41-e7195cc926c8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:59:45 embed-certs-825429 crio[834]: time="2025-10-08T22:59:45.869536135Z" level=info msg="Starting container: a15a644adc20f87f8bbd15df407614d8078ba84305ab6c6b6f55a1ac0655a31a" id=7785c61a-1b23-4cee-aa4c-86a9b793f428 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:59:45 embed-certs-825429 crio[834]: time="2025-10-08T22:59:45.878554029Z" level=info msg="Started container" PID=1707 containerID=a15a644adc20f87f8bbd15df407614d8078ba84305ab6c6b6f55a1ac0655a31a description=kube-system/coredns-66bc5c9577-s7kcb/coredns id=7785c61a-1b23-4cee-aa4c-86a9b793f428 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d95fe4b3b299d6fe564e0afbe287dcccdf3ca6c72b1c45ba33d4538e47f637fb
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.223599128Z" level=info msg="Running pod sandbox: default/busybox/POD" id=82e23a9b-ff6c-4e9a-9c09-00ba2b291cde name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.223674575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.228695557Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:02eedbda6800896fe8451f6973a5842f635f47fc2df036a5bb8a4b50c689e0c5 UID:030c16a7-3c27-4d5e-868d-923d85baa808 NetNS:/var/run/netns/1abb3062-1a3b-498d-95b2-526a08528e8d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000120a90}] Aliases:map[]}"
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.228866398Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.239950367Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:02eedbda6800896fe8451f6973a5842f635f47fc2df036a5bb8a4b50c689e0c5 UID:030c16a7-3c27-4d5e-868d-923d85baa808 NetNS:/var/run/netns/1abb3062-1a3b-498d-95b2-526a08528e8d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000120a90}] Aliases:map[]}"
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.240282835Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.244999821Z" level=info msg="Ran pod sandbox 02eedbda6800896fe8451f6973a5842f635f47fc2df036a5bb8a4b50c689e0c5 with infra container: default/busybox/POD" id=82e23a9b-ff6c-4e9a-9c09-00ba2b291cde name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.247699955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=07590b70-80b6-4e57-9284-359649f5e14d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.247875958Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=07590b70-80b6-4e57-9284-359649f5e14d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.247940032Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=07590b70-80b6-4e57-9284-359649f5e14d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.250121645Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1f732fc1-b346-4658-b6d1-92db4d8f27a2 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:59:49 embed-certs-825429 crio[834]: time="2025-10-08T22:59:49.253208426Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.270158586Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=1f732fc1-b346-4658-b6d1-92db4d8f27a2 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.270851041Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d3d3df81-9271-467e-831c-942b536cd740 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.272746103Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3773b90e-9787-4182-98f9-0f4ac8f3ffb2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.280689343Z" level=info msg="Creating container: default/busybox/busybox" id=45a25855-daf2-4f5b-8059-48e45968b372 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.281600476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.301167203Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.301702668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.317051803Z" level=info msg="Created container d314cb09894e379a6c696e54e8e5de9c7ecc6c3e24420934ab3e0a072c83cf99: default/busybox/busybox" id=45a25855-daf2-4f5b-8059-48e45968b372 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.318103803Z" level=info msg="Starting container: d314cb09894e379a6c696e54e8e5de9c7ecc6c3e24420934ab3e0a072c83cf99" id=06005327-0270-4f1a-9e50-82e46be1a58d name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:59:51 embed-certs-825429 crio[834]: time="2025-10-08T22:59:51.323189434Z" level=info msg="Started container" PID=1759 containerID=d314cb09894e379a6c696e54e8e5de9c7ecc6c3e24420934ab3e0a072c83cf99 description=default/busybox/busybox id=06005327-0270-4f1a-9e50-82e46be1a58d name=/runtime.v1.RuntimeService/StartContainer sandboxID=02eedbda6800896fe8451f6973a5842f635f47fc2df036a5bb8a4b50c689e0c5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	d314cb09894e3       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   10 seconds ago       Running             busybox                   0                   02eedbda68008       busybox                                      default
	a15a644adc20f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago       Running             coredns                   0                   d95fe4b3b299d       coredns-66bc5c9577-s7kcb                     kube-system
	37e93d5115417       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago       Running             storage-provisioner       0                   345c930f0993f       storage-provisioner                          kube-system
	e670dc1868267       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      56 seconds ago       Running             kindnet-cni               0                   2d5222635690b       kindnet-kjmsw                                kube-system
	d12ccdd18d554       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      56 seconds ago       Running             kube-proxy                0                   47ad06e05ba68       kube-proxy-86wtc                             kube-system
	cac24998976d4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   5a18084674137       kube-apiserver-embed-certs-825429            kube-system
	6ddd555f42c90       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   683ca10ef72aa       etcd-embed-certs-825429                      kube-system
	7c50790f8bcf0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   83eccb4aa08f6       kube-controller-manager-embed-certs-825429   kube-system
	5b7f45ca5accd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   781bcba840e3c       kube-scheduler-embed-certs-825429            kube-system
	
	
	==> coredns [a15a644adc20f87f8bbd15df407614d8078ba84305ab6c6b6f55a1ac0655a31a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56154 - 7710 "HINFO IN 7592618350648362609.6410052693411423685. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023791556s
	
	
	==> describe nodes <==
	Name:               embed-certs-825429
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-825429
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=embed-certs-825429
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_58_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:58:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-825429
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:59:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:59:59 +0000   Wed, 08 Oct 2025 22:58:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:59:59 +0000   Wed, 08 Oct 2025 22:58:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:59:59 +0000   Wed, 08 Oct 2025 22:58:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:59:59 +0000   Wed, 08 Oct 2025 22:59:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-825429
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 a32ed9ac30cd4ead8b14c0444a8d5224
	  System UUID:                9bcebe6b-6a1d-4fec-b0e0-57daefae99b1
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-s7kcb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     60s
	  kube-system                 etcd-embed-certs-825429                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         66s
	  kube-system                 kindnet-kjmsw                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-embed-certs-825429             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-embed-certs-825429    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-86wtc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-embed-certs-825429             100m (5%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Warning  CgroupV1                 76s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  75s (x8 over 76s)  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s (x8 over 76s)  kubelet          Node embed-certs-825429 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s (x8 over 76s)  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientPID
	  Normal   Starting                 65s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s                kubelet          Node embed-certs-825429 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s                kubelet          Node embed-certs-825429 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s                kubelet          Node embed-certs-825429 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           61s                node-controller  Node embed-certs-825429 event: Registered Node embed-certs-825429 in Controller
	  Normal   NodeReady                17s                kubelet          Node embed-certs-825429 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:58] overlayfs: idmapped layers are currently not supported
	[  +5.164783] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6ddd555f42c905ccae2d293c81d1e0bb385f276fb35975e8557c7d27f5370458] <==
	{"level":"warn","ts":"2025-10-08T22:58:52.238400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.266471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.290916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.327281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.342172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.363694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.378399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.395139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.414002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.447179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.477799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.487685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.504894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.550162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.593367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.636501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.666303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.706519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.728873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.767531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.811710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.844830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.874773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:52.912504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:53.151002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51936","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:00:02 up  1:42,  0 user,  load average: 1.94, 1.73, 1.74
	Linux embed-certs-825429 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e670dc18682671960317cafb2643fe5466f139e72a67e90d954d66f2ea1ae64d] <==
	I1008 22:59:05.105466       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:59:05.106022       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1008 22:59:05.106207       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:59:05.106230       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:59:05.106246       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:59:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:59:05.310697       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:59:05.310723       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:59:05.310731       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:59:05.311487       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 22:59:35.311514       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 22:59:35.311735       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 22:59:35.311872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 22:59:35.312805       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1008 22:59:36.811039       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:59:36.811074       1 metrics.go:72] Registering metrics
	I1008 22:59:36.811149       1 controller.go:711] "Syncing nftables rules"
	I1008 22:59:45.317130       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:59:45.317311       1 main.go:301] handling current node
	I1008 22:59:55.311808       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 22:59:55.311854       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cac24998976d4a1759dabf9c9b5289901b36c06347357ec53863b81725d8bc40] <==
	I1008 22:58:54.647462       1 cache.go:39] Caches are synced for autoregister controller
	I1008 22:58:54.648145       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 22:58:54.718589       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:58:54.718710       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1008 22:58:54.747106       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:58:54.771174       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:58:54.771804       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 22:58:55.203332       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1008 22:58:55.221303       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1008 22:58:55.221333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:58:56.220479       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 22:58:56.291450       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 22:58:56.447472       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 22:58:56.461195       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1008 22:58:56.462168       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 22:58:56.471452       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:58:56.554709       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 22:58:57.453166       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 22:58:57.495045       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 22:58:57.579970       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1008 22:59:02.354732       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 22:59:02.499927       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:59:02.530033       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:59:02.559677       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1008 22:59:59.054042       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:49866: use of closed network connection
	
	
	==> kube-controller-manager [7c50790f8bcf0e0d297c5424e242c28710410daf554b528969faa0f03732cfd7] <==
	I1008 22:59:01.665142       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 22:59:01.667369       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1008 22:59:01.667420       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1008 22:59:01.667433       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1008 22:59:01.667440       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1008 22:59:01.669543       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 22:59:01.674323       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1008 22:59:01.677703       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 22:59:01.684529       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1008 22:59:01.684860       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-825429" podCIDRs=["10.244.0.0/24"]
	I1008 22:59:01.684933       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 22:59:01.685036       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 22:59:01.685126       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-825429"
	I1008 22:59:01.685198       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 22:59:01.685439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:59:01.685456       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:59:01.685463       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:59:01.685965       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 22:59:01.693909       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 22:59:01.694059       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 22:59:01.694139       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 22:59:01.696434       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 22:59:01.698698       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 22:59:01.700363       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:59:46.693307       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d12ccdd18d5543af929c3d3646f247054d2fe76e8e7514de5470de97ff996f28] <==
	I1008 22:59:04.973509       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:59:05.055159       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:59:05.155486       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:59:05.155591       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1008 22:59:05.155708       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:59:05.178568       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:59:05.178632       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:59:05.182828       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:59:05.183166       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:59:05.183198       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:59:05.184576       1 config.go:200] "Starting service config controller"
	I1008 22:59:05.184597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:59:05.184614       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:59:05.184619       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:59:05.184631       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:59:05.184635       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:59:05.185486       1 config.go:309] "Starting node config controller"
	I1008 22:59:05.185506       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:59:05.185513       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 22:59:05.285795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 22:59:05.285907       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 22:59:05.285939       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5b7f45ca5accde97290c366a8b0ca50ad4493966f167f725cb76618e3542d2ef] <==
	E1008 22:58:54.648213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 22:58:54.648328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1008 22:58:54.657956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1008 22:58:54.658154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 22:58:54.658249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 22:58:54.658339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 22:58:54.658423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 22:58:54.658511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 22:58:54.664676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 22:58:54.664868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 22:58:54.665005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 22:58:54.665157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 22:58:54.665306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 22:58:54.665412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 22:58:55.564811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 22:58:55.661046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 22:58:55.691061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 22:58:55.697076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 22:58:55.727848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 22:58:55.778551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 22:58:55.822122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 22:58:55.823898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 22:58:55.831232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 22:58:56.046625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1008 22:58:58.868230       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 22:59:02 embed-certs-825429 kubelet[1289]: I1008 22:59:02.841661    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ccf3390-491f-4ac1-abd7-15bed7e0fdc3-kube-proxy\") pod \"kube-proxy-86wtc\" (UID: \"3ccf3390-491f-4ac1-abd7-15bed7e0fdc3\") " pod="kube-system/kube-proxy-86wtc"
	Oct 08 22:59:02 embed-certs-825429 kubelet[1289]: I1008 22:59:02.841682    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb5b265b-7be1-4870-af88-23dfe38926c9-xtables-lock\") pod \"kindnet-kjmsw\" (UID: \"eb5b265b-7be1-4870-af88-23dfe38926c9\") " pod="kube-system/kindnet-kjmsw"
	Oct 08 22:59:02 embed-certs-825429 kubelet[1289]: I1008 22:59:02.841702    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tw7d\" (UniqueName: \"kubernetes.io/projected/3ccf3390-491f-4ac1-abd7-15bed7e0fdc3-kube-api-access-9tw7d\") pod \"kube-proxy-86wtc\" (UID: \"3ccf3390-491f-4ac1-abd7-15bed7e0fdc3\") " pod="kube-system/kube-proxy-86wtc"
	Oct 08 22:59:02 embed-certs-825429 kubelet[1289]: I1008 22:59:02.841736    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4b4\" (UniqueName: \"kubernetes.io/projected/eb5b265b-7be1-4870-af88-23dfe38926c9-kube-api-access-wr4b4\") pod \"kindnet-kjmsw\" (UID: \"eb5b265b-7be1-4870-af88-23dfe38926c9\") " pod="kube-system/kindnet-kjmsw"
	Oct 08 22:59:04 embed-certs-825429 kubelet[1289]: E1008 22:59:04.014294    1289 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 08 22:59:04 embed-certs-825429 kubelet[1289]: E1008 22:59:04.014373    1289 projected.go:196] Error preparing data for projected volume kube-api-access-9tw7d for pod kube-system/kube-proxy-86wtc: failed to sync configmap cache: timed out waiting for the condition
	Oct 08 22:59:04 embed-certs-825429 kubelet[1289]: E1008 22:59:04.014464    1289 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3ccf3390-491f-4ac1-abd7-15bed7e0fdc3-kube-api-access-9tw7d podName:3ccf3390-491f-4ac1-abd7-15bed7e0fdc3 nodeName:}" failed. No retries permitted until 2025-10-08 22:59:04.514436956 +0000 UTC m=+7.248286561 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9tw7d" (UniqueName: "kubernetes.io/projected/3ccf3390-491f-4ac1-abd7-15bed7e0fdc3-kube-api-access-9tw7d") pod "kube-proxy-86wtc" (UID: "3ccf3390-491f-4ac1-abd7-15bed7e0fdc3") : failed to sync configmap cache: timed out waiting for the condition
	Oct 08 22:59:04 embed-certs-825429 kubelet[1289]: E1008 22:59:04.050315    1289 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 08 22:59:04 embed-certs-825429 kubelet[1289]: E1008 22:59:04.050351    1289 projected.go:196] Error preparing data for projected volume kube-api-access-wr4b4 for pod kube-system/kindnet-kjmsw: failed to sync configmap cache: timed out waiting for the condition
	Oct 08 22:59:04 embed-certs-825429 kubelet[1289]: E1008 22:59:04.050421    1289 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb5b265b-7be1-4870-af88-23dfe38926c9-kube-api-access-wr4b4 podName:eb5b265b-7be1-4870-af88-23dfe38926c9 nodeName:}" failed. No retries permitted until 2025-10-08 22:59:04.550390533 +0000 UTC m=+7.284240130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wr4b4" (UniqueName: "kubernetes.io/projected/eb5b265b-7be1-4870-af88-23dfe38926c9-kube-api-access-wr4b4") pod "kindnet-kjmsw" (UID: "eb5b265b-7be1-4870-af88-23dfe38926c9") : failed to sync configmap cache: timed out waiting for the condition
	Oct 08 22:59:04 embed-certs-825429 kubelet[1289]: I1008 22:59:04.567142    1289 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 08 22:59:04 embed-certs-825429 kubelet[1289]: W1008 22:59:04.861709    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/crio-2d5222635690b249305b4eabd5588d3b715e2d26ca34e1735b9d9ed8ae416522 WatchSource:0}: Error finding container 2d5222635690b249305b4eabd5588d3b715e2d26ca34e1735b9d9ed8ae416522: Status 404 returned error can't find the container with id 2d5222635690b249305b4eabd5588d3b715e2d26ca34e1735b9d9ed8ae416522
	Oct 08 22:59:05 embed-certs-825429 kubelet[1289]: I1008 22:59:05.820240    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kjmsw" podStartSLOduration=3.820222119 podStartE2EDuration="3.820222119s" podCreationTimestamp="2025-10-08 22:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:59:05.820004476 +0000 UTC m=+8.553854089" watchObservedRunningTime="2025-10-08 22:59:05.820222119 +0000 UTC m=+8.554071757"
	Oct 08 22:59:06 embed-certs-825429 kubelet[1289]: I1008 22:59:06.331619    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-86wtc" podStartSLOduration=4.331598449 podStartE2EDuration="4.331598449s" podCreationTimestamp="2025-10-08 22:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:59:05.834753579 +0000 UTC m=+8.568603184" watchObservedRunningTime="2025-10-08 22:59:06.331598449 +0000 UTC m=+9.065448054"
	Oct 08 22:59:45 embed-certs-825429 kubelet[1289]: I1008 22:59:45.418270    1289 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 08 22:59:45 embed-certs-825429 kubelet[1289]: I1008 22:59:45.573293    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdsjb\" (UniqueName: \"kubernetes.io/projected/5656ffce-aa1a-4e17-9d19-a3a2eeeba35f-kube-api-access-mdsjb\") pod \"coredns-66bc5c9577-s7kcb\" (UID: \"5656ffce-aa1a-4e17-9d19-a3a2eeeba35f\") " pod="kube-system/coredns-66bc5c9577-s7kcb"
	Oct 08 22:59:45 embed-certs-825429 kubelet[1289]: I1008 22:59:45.573352    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ccb25fa2-fa55-465c-9fcc-194f56db4ad4-tmp\") pod \"storage-provisioner\" (UID: \"ccb25fa2-fa55-465c-9fcc-194f56db4ad4\") " pod="kube-system/storage-provisioner"
	Oct 08 22:59:45 embed-certs-825429 kubelet[1289]: I1008 22:59:45.573375    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqnz9\" (UniqueName: \"kubernetes.io/projected/ccb25fa2-fa55-465c-9fcc-194f56db4ad4-kube-api-access-lqnz9\") pod \"storage-provisioner\" (UID: \"ccb25fa2-fa55-465c-9fcc-194f56db4ad4\") " pod="kube-system/storage-provisioner"
	Oct 08 22:59:45 embed-certs-825429 kubelet[1289]: I1008 22:59:45.573393    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5656ffce-aa1a-4e17-9d19-a3a2eeeba35f-config-volume\") pod \"coredns-66bc5c9577-s7kcb\" (UID: \"5656ffce-aa1a-4e17-9d19-a3a2eeeba35f\") " pod="kube-system/coredns-66bc5c9577-s7kcb"
	Oct 08 22:59:45 embed-certs-825429 kubelet[1289]: W1008 22:59:45.779211    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/crio-345c930f0993f06045aa92cb909470c5c40a56d95d52d7ecf5b498dca1735476 WatchSource:0}: Error finding container 345c930f0993f06045aa92cb909470c5c40a56d95d52d7ecf5b498dca1735476: Status 404 returned error can't find the container with id 345c930f0993f06045aa92cb909470c5c40a56d95d52d7ecf5b498dca1735476
	Oct 08 22:59:45 embed-certs-825429 kubelet[1289]: W1008 22:59:45.807069    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/crio-d95fe4b3b299d6fe564e0afbe287dcccdf3ca6c72b1c45ba33d4538e47f637fb WatchSource:0}: Error finding container d95fe4b3b299d6fe564e0afbe287dcccdf3ca6c72b1c45ba33d4538e47f637fb: Status 404 returned error can't find the container with id d95fe4b3b299d6fe564e0afbe287dcccdf3ca6c72b1c45ba33d4538e47f637fb
	Oct 08 22:59:45 embed-certs-825429 kubelet[1289]: I1008 22:59:45.954447    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.954427377 podStartE2EDuration="41.954427377s" podCreationTimestamp="2025-10-08 22:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:59:45.926290498 +0000 UTC m=+48.660140119" watchObservedRunningTime="2025-10-08 22:59:45.954427377 +0000 UTC m=+48.688276973"
	Oct 08 22:59:46 embed-certs-825429 kubelet[1289]: I1008 22:59:46.914568    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-s7kcb" podStartSLOduration=44.914547358 podStartE2EDuration="44.914547358s" podCreationTimestamp="2025-10-08 22:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:59:45.955179213 +0000 UTC m=+48.689028818" watchObservedRunningTime="2025-10-08 22:59:46.914547358 +0000 UTC m=+49.648396955"
	Oct 08 22:59:48 embed-certs-825429 kubelet[1289]: I1008 22:59:48.996105    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6jg9\" (UniqueName: \"kubernetes.io/projected/030c16a7-3c27-4d5e-868d-923d85baa808-kube-api-access-l6jg9\") pod \"busybox\" (UID: \"030c16a7-3c27-4d5e-868d-923d85baa808\") " pod="default/busybox"
	Oct 08 22:59:59 embed-certs-825429 kubelet[1289]: E1008 22:59:59.054507    1289 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48354->127.0.0.1:34315: write tcp 127.0.0.1:48354->127.0.0.1:34315: write: broken pipe
	
	
	==> storage-provisioner [37e93d51154170e0075a1dfec9e9eba4dcf150ac5283a1fd6edb0712d96d6763] <==
	I1008 22:59:45.866158       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 22:59:45.872552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:45.885593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:59:45.885791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 22:59:45.885975       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-825429_c93e5964-d918-4dea-9099-bea536c9d4c2!
	I1008 22:59:45.886881       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"deb8d6fa-4d23-4078-b8a3-474c7c204563", APIVersion:"v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-825429_c93e5964-d918-4dea-9099-bea536c9d4c2 became leader
	W1008 22:59:45.918618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:45.940256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:59:45.986482       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-825429_c93e5964-d918-4dea-9099-bea536c9d4c2!
	W1008 22:59:47.944637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:47.949462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:49.953009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:49.960298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:51.964247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:51.969163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:53.972805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:53.980305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:55.984344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:55.989016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:58.009853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:58.058653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:00:00.067267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:00:00.125852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:00:02.130173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:00:02.137625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825429 -n embed-certs-825429
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-825429 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.51s)
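Editorial note: the post-mortem steps above can be re-run by hand. The following is a minimal sketch using the same binary, profile name, and commands the test harness invokes (all taken from the log above); the final `logs` invocation is the optional collection step minikube's own error boxes recommend elsewhere in this report.

    # Re-run the embed-certs post-mortem checks manually (sketch; binary path and profile name from the log above)
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-825429 -n embed-certs-825429
    kubectl --context embed-certs-825429 get po -A --field-selector=status.phase!=Running \
      -o=jsonpath='{.items[*].metadata.name}'
    # Optionally collect full node logs for attaching to a GitHub issue
    out/minikube-linux-arm64 -p embed-certs-825429 logs --file=logs.txt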

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (611.820899ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:00:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
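The exit status 11 above is minikube's pre-flight "is the runtime paused?" check failing: it runs `sudo runc list -f json` on the node, and runc exits 1 because its default state directory /run/runc does not exist. A minimal diagnosis sketch against this run's profile (the crictl step, and the assumption that crio keeps its runtime state under a different root, are not confirmed by this log):

	# re-run the exact check that failed
	out/minikube-linux-arm64 -p default-k8s-diff-port-779490 ssh -- sudo runc list -f json
	# does runc's default state directory exist on the node at all?
	out/minikube-linux-arm64 -p default-k8s-diff-port-779490 ssh -- ls -ld /run/runc
	# ask the CRI runtime which low-level runtime (and state root) it is actually using
	out/minikube-linux-arm64 -p default-k8s-diff-port-779490 ssh -- sudo crictl info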
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-779490 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-779490 describe deploy/metrics-server -n kube-system: exit status 1 (141.117607ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-779490 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
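The assertion above looks for the rewritten registry prefix in the metrics-server deployment spec. A manual spot-check would look roughly like the sketch below (the jsonpath is an assumption about where the image field lives, not the exact expression the test uses; in this run the deployment was never created, so it would return NotFound):

	kubectl --context default-k8s-diff-port-779490 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4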
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-779490
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-779490:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca",
	        "Created": "2025-10-08T22:58:32.369538297Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 195050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T22:58:32.441146125Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/hosts",
	        "LogPath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca-json.log",
	        "Name": "/default-k8s-diff-port-779490",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-779490:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-779490",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca",
	                "LowerDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-779490",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-779490/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-779490",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-779490",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-779490",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "12711d8f0d972891bb2198d776c3a3ce3ed1b38dde43ea601293c138becaf1a2",
	            "SandboxKey": "/var/run/docker/netns/12711d8f0d97",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-779490": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:22:2e:45:c0:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95a85807530ab25d32d1815ee29ce3fc904bd88d88973d6a88e562431efd0d87",
	                    "EndpointID": "003a46aa974b8b5545a20ad83a1c96d36b25481b6ff1e9fb6a527deb30b23340",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-779490",
	                        "74faf5bf01ef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
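When only a few of the fields above matter, docker inspect's --format flag can pull them directly instead of dumping the full document; a small sketch against this run's container (plain docker CLI usage, not something the harness does):

	# container state, including whether it is paused
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-779490
	# host port mapped to the API server port (8444/tcp for this profile)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-779490
	# static IP on the profile's network
	docker inspect -f '{{(index .NetworkSettings.Networks "default-k8s-diff-port-779490").IPAddress}}' default-k8s-diff-port-779490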
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-779490 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-779490 logs -n 25: (1.786910775s)
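The 25-line tail captured below is the harness default; the advice box in the failure output above asks for the complete log written to a file, which for this profile would be (sketch):

	out/minikube-linux-arm64 -p default-k8s-diff-port-779490 logs --file=logs.txt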
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:53 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-110407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │                     │
	│ stop    │ -p old-k8s-version-110407 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-110407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:54 UTC │
	│ start   │ -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:54 UTC │ 08 Oct 25 22:55 UTC │
	│ image   │ old-k8s-version-110407 image list --format=json                                                                                                                                                                                               │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ pause   │ -p old-k8s-version-110407 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │                     │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ delete  │ -p old-k8s-version-110407                                                                                                                                                                                                                     │ old-k8s-version-110407       │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:55 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:55 UTC │ 08 Oct 25 22:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	│ stop    │ -p no-preload-939665 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:58 UTC │
	│ image   │ no-preload-939665 image list --format=json                                                                                                                                                                                                    │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ pause   │ -p no-preload-939665 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	│ ssh     │ force-systemd-flag-385382 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p force-systemd-flag-385382                                                                                                                                                                                                                  │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                                                                                          │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                                                                                          │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-036919                                                                                                                                                                                                               │ disable-driver-mounts-036919 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:59 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 22:58:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 22:58:25.990357  193942 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:58:25.990578  193942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:58:25.990605  193942 out.go:374] Setting ErrFile to fd 2...
	I1008 22:58:25.990630  193942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:58:25.990927  193942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:58:25.991391  193942 out.go:368] Setting JSON to false
	I1008 22:58:25.992267  193942 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6056,"bootTime":1759958250,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:58:25.992367  193942 start.go:141] virtualization:  
	I1008 22:58:25.995312  193942 out.go:179] * [default-k8s-diff-port-779490] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:58:25.997266  193942 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:58:25.997336  193942 notify.go:220] Checking for updates...
	I1008 22:58:26.002904  193942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:58:26.004374  193942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:58:26.014387  193942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:58:26.017035  193942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:58:26.018639  193942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:58:26.020743  193942 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:58:26.020929  193942 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:58:26.065098  193942 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:58:26.065244  193942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:58:26.146197  193942 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-08 22:58:26.134768696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:58:26.146306  193942 docker.go:318] overlay module found
	I1008 22:58:26.148990  193942 out.go:179] * Using the docker driver based on user configuration
	I1008 22:58:26.150220  193942 start.go:305] selected driver: docker
	I1008 22:58:26.150240  193942 start.go:925] validating driver "docker" against <nil>
	I1008 22:58:26.150254  193942 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:58:26.151017  193942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:58:26.229160  193942 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2025-10-08 22:58:26.219012889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:58:26.229547  193942 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 22:58:26.230113  193942 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:58:26.232412  193942 out.go:179] * Using Docker driver with root privileges
	I1008 22:58:26.234018  193942 cni.go:84] Creating CNI manager for ""
	I1008 22:58:26.234095  193942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:58:26.234110  193942 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 22:58:26.234188  193942 start.go:349] cluster config:
	{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:58:26.236351  193942 out.go:179] * Starting "default-k8s-diff-port-779490" primary control-plane node in "default-k8s-diff-port-779490" cluster
	I1008 22:58:26.237799  193942 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 22:58:26.239088  193942 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 22:58:26.240366  193942 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:26.240421  193942 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 22:58:26.240434  193942 cache.go:58] Caching tarball of preloaded images
	I1008 22:58:26.240522  193942 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 22:58:26.240537  193942 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 22:58:26.240637  193942 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 22:58:26.240661  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json: {Name:mkabb98c8b8938b0afd74c24337d3cb6e526a1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:26.240805  193942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 22:58:26.262793  193942 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 22:58:26.262821  193942 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 22:58:26.262847  193942 cache.go:232] Successfully downloaded all kic artifacts
	I1008 22:58:26.262870  193942 start.go:360] acquireMachinesLock for default-k8s-diff-port-779490: {Name:mkf9138008d7ef2884518c448a03b33b088d9068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 22:58:26.262995  193942 start.go:364] duration metric: took 103.862µs to acquireMachinesLock for "default-k8s-diff-port-779490"
	I1008 22:58:26.263025  193942 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:58:26.263090  193942 start.go:125] createHost starting for "" (driver="docker")
	I1008 22:58:22.293446  193267 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:58:22.293805  193267 start.go:159] libmachine.API.Create for "embed-certs-825429" (driver="docker")
	I1008 22:58:22.293865  193267 client.go:168] LocalClient.Create starting
	I1008 22:58:22.293945  193267 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:58:22.293984  193267 main.go:141] libmachine: Decoding PEM data...
	I1008 22:58:22.294006  193267 main.go:141] libmachine: Parsing certificate...
	I1008 22:58:22.294072  193267 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:58:22.294098  193267 main.go:141] libmachine: Decoding PEM data...
	I1008 22:58:22.294113  193267 main.go:141] libmachine: Parsing certificate...
	I1008 22:58:22.294473  193267 cli_runner.go:164] Run: docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:58:22.310790  193267 cli_runner.go:211] docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:58:22.310860  193267 network_create.go:284] running [docker network inspect embed-certs-825429] to gather additional debugging logs...
	I1008 22:58:22.310884  193267 cli_runner.go:164] Run: docker network inspect embed-certs-825429
	W1008 22:58:22.330132  193267 cli_runner.go:211] docker network inspect embed-certs-825429 returned with exit code 1
	I1008 22:58:22.330168  193267 network_create.go:287] error running [docker network inspect embed-certs-825429]: docker network inspect embed-certs-825429: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-825429 not found
	I1008 22:58:22.330190  193267 network_create.go:289] output of [docker network inspect embed-certs-825429]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-825429 not found
	
	** /stderr **
	I1008 22:58:22.330270  193267 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:58:22.354321  193267 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:58:22.354995  193267 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:58:22.355573  193267 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:58:22.356165  193267 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cf220}
	I1008 22:58:22.356187  193267 network_create.go:124] attempt to create docker network embed-certs-825429 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1008 22:58:22.356321  193267 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-825429 embed-certs-825429
	I1008 22:58:22.431314  193267 network_create.go:108] docker network embed-certs-825429 192.168.76.0/24 created
	I1008 22:58:22.431349  193267 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-825429" container
	I1008 22:58:22.431421  193267 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:58:22.454085  193267 cli_runner.go:164] Run: docker volume create embed-certs-825429 --label name.minikube.sigs.k8s.io=embed-certs-825429 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:58:22.487590  193267 oci.go:103] Successfully created a docker volume embed-certs-825429
	I1008 22:58:22.487682  193267 cli_runner.go:164] Run: docker run --rm --name embed-certs-825429-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-825429 --entrypoint /usr/bin/test -v embed-certs-825429:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:58:23.122793  193267 oci.go:107] Successfully prepared a docker volume embed-certs-825429
	I1008 22:58:23.122854  193267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:23.122874  193267 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:58:23.122946  193267 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-825429:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 22:58:26.265952  193942 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 22:58:26.266184  193942 start.go:159] libmachine.API.Create for "default-k8s-diff-port-779490" (driver="docker")
	I1008 22:58:26.266218  193942 client.go:168] LocalClient.Create starting
	I1008 22:58:26.266282  193942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 22:58:26.266315  193942 main.go:141] libmachine: Decoding PEM data...
	I1008 22:58:26.266328  193942 main.go:141] libmachine: Parsing certificate...
	I1008 22:58:26.266381  193942 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 22:58:26.266401  193942 main.go:141] libmachine: Decoding PEM data...
	I1008 22:58:26.266410  193942 main.go:141] libmachine: Parsing certificate...
	I1008 22:58:26.266883  193942 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 22:58:26.283531  193942 cli_runner.go:211] docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 22:58:26.283612  193942 network_create.go:284] running [docker network inspect default-k8s-diff-port-779490] to gather additional debugging logs...
	I1008 22:58:26.283629  193942 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490
	W1008 22:58:26.300290  193942 cli_runner.go:211] docker network inspect default-k8s-diff-port-779490 returned with exit code 1
	I1008 22:58:26.300318  193942 network_create.go:287] error running [docker network inspect default-k8s-diff-port-779490]: docker network inspect default-k8s-diff-port-779490: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-779490 not found
	I1008 22:58:26.300339  193942 network_create.go:289] output of [docker network inspect default-k8s-diff-port-779490]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-779490 not found
	
	** /stderr **
	I1008 22:58:26.300441  193942 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:58:26.316692  193942 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 22:58:26.317109  193942 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 22:58:26.317401  193942 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 22:58:26.317722  193942 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c72f626705cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:c4:86:26:e3:9b} reservation:<nil>}
	I1008 22:58:26.318168  193942 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b83e0}
	I1008 22:58:26.318194  193942 network_create.go:124] attempt to create docker network default-k8s-diff-port-779490 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1008 22:58:26.318252  193942 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 default-k8s-diff-port-779490
	I1008 22:58:26.394325  193942 network_create.go:108] docker network default-k8s-diff-port-779490 192.168.85.0/24 created
	I1008 22:58:26.394356  193942 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-779490" container
	I1008 22:58:26.394443  193942 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 22:58:26.410393  193942 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-779490 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --label created_by.minikube.sigs.k8s.io=true
	I1008 22:58:26.429093  193942 oci.go:103] Successfully created a docker volume default-k8s-diff-port-779490
	I1008 22:58:26.429189  193942 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-779490-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --entrypoint /usr/bin/test -v default-k8s-diff-port-779490:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 22:58:27.814315  193942 cli_runner.go:217] Completed: docker run --rm --name default-k8s-diff-port-779490-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --entrypoint /usr/bin/test -v default-k8s-diff-port-779490:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (1.385071285s)
	I1008 22:58:27.814341  193942 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-779490
	I1008 22:58:27.814364  193942 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:27.814383  193942 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 22:58:27.814448  193942 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-779490:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 22:58:27.118536  193267 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-825429:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (3.995551652s)
	I1008 22:58:27.118570  193267 kic.go:203] duration metric: took 3.995693407s to extract preloaded images to volume ...
	W1008 22:58:27.118742  193267 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:58:27.118882  193267 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:58:27.253113  193267 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-825429 --name embed-certs-825429 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-825429 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-825429 --network embed-certs-825429 --ip 192.168.76.2 --volume embed-certs-825429:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:58:27.673367  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Running}}
	I1008 22:58:27.712441  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:58:27.748089  193267 cli_runner.go:164] Run: docker exec embed-certs-825429 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:58:27.817037  193267 oci.go:144] the created container "embed-certs-825429" has a running status.
	I1008 22:58:27.817077  193267 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa...
	I1008 22:58:29.425227  193267 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:58:29.462916  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:58:29.497401  193267 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:58:29.497421  193267 kic_runner.go:114] Args: [docker exec --privileged embed-certs-825429 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:58:29.563858  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:58:29.593829  193267 machine.go:93] provisionDockerMachine start ...
	I1008 22:58:29.593931  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:29.625609  193267 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:29.626002  193267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1008 22:58:29.626021  193267 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:58:29.627369  193267 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 22:58:32.245853  193942 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-779490:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.431348104s)
	I1008 22:58:32.245882  193942 kic.go:203] duration metric: took 4.431497201s to extract preloaded images to volume ...
	W1008 22:58:32.246018  193942 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 22:58:32.246126  193942 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 22:58:32.349126  193942 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-779490 --name default-k8s-diff-port-779490 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-779490 --network default-k8s-diff-port-779490 --ip 192.168.85.2 --volume default-k8s-diff-port-779490:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 22:58:32.727550  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Running}}
	I1008 22:58:32.749832  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:58:32.779701  193942 cli_runner.go:164] Run: docker exec default-k8s-diff-port-779490 stat /var/lib/dpkg/alternatives/iptables
	I1008 22:58:32.840152  193942 oci.go:144] the created container "default-k8s-diff-port-779490" has a running status.
	I1008 22:58:32.840186  193942 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa...
	I1008 22:58:33.783624  193942 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 22:58:33.812895  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:58:33.834758  193942 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 22:58:33.834778  193942 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-779490 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 22:58:33.884178  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:58:33.905394  193942 machine.go:93] provisionDockerMachine start ...
	I1008 22:58:33.905487  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:33.932894  193942 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:33.933220  193942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1008 22:58:33.933230  193942 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 22:58:33.933970  193942 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50822->127.0.0.1:33076: read: connection reset by peer
	I1008 22:58:32.853473  193267 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 22:58:32.853500  193267 ubuntu.go:182] provisioning hostname "embed-certs-825429"
	I1008 22:58:32.853923  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:32.897941  193267 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:32.898248  193267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1008 22:58:32.898260  193267 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825429 && echo "embed-certs-825429" | sudo tee /etc/hostname
	I1008 22:58:33.126908  193267 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 22:58:33.126979  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:33.185854  193267 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:33.186159  193267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1008 22:58:33.186175  193267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825429' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825429/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825429' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:58:33.383523  193267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:58:33.383546  193267 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:58:33.383566  193267 ubuntu.go:190] setting up certificates
	I1008 22:58:33.383591  193267 provision.go:84] configureAuth start
	I1008 22:58:33.383705  193267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 22:58:33.444214  193267 provision.go:143] copyHostCerts
	I1008 22:58:33.444283  193267 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:58:33.444297  193267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:58:33.444379  193267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:58:33.444482  193267 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:58:33.444493  193267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:58:33.444523  193267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:58:33.444582  193267 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:58:33.444589  193267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:58:33.444614  193267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:58:33.444668  193267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825429 san=[127.0.0.1 192.168.76.2 embed-certs-825429 localhost minikube]
	I1008 22:58:33.683095  193267 provision.go:177] copyRemoteCerts
	I1008 22:58:33.683161  193267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:58:33.683205  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:33.700502  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:33.802650  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:58:33.825364  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 22:58:33.849051  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 22:58:33.888124  193267 provision.go:87] duration metric: took 504.48759ms to configureAuth
	I1008 22:58:33.888153  193267 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:58:33.888322  193267 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:58:33.888423  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:33.909410  193267 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:33.909815  193267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33071 <nil> <nil>}
	I1008 22:58:33.909851  193267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:58:34.253423  193267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:58:34.253443  193267 machine.go:96] duration metric: took 4.659590061s to provisionDockerMachine
	I1008 22:58:34.253454  193267 client.go:171] duration metric: took 11.959578193s to LocalClient.Create
	I1008 22:58:34.253470  193267 start.go:167] duration metric: took 11.959666694s to libmachine.API.Create "embed-certs-825429"
	I1008 22:58:34.253477  193267 start.go:293] postStartSetup for "embed-certs-825429" (driver="docker")
	I1008 22:58:34.253486  193267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:58:34.253550  193267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:58:34.253591  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:34.271288  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:34.375370  193267 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:58:34.379078  193267 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:58:34.379148  193267 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:58:34.379173  193267 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:58:34.379270  193267 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:58:34.379403  193267 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:58:34.379554  193267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:58:34.392792  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:58:34.423568  193267 start.go:296] duration metric: took 170.076155ms for postStartSetup
	I1008 22:58:34.424027  193267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 22:58:34.451266  193267 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/config.json ...
	I1008 22:58:34.451539  193267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:58:34.451579  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:34.475926  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:34.583740  193267 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:58:34.589471  193267 start.go:128] duration metric: took 12.299382125s to createHost
	I1008 22:58:34.589493  193267 start.go:83] releasing machines lock for "embed-certs-825429", held for 12.299512662s
	I1008 22:58:34.589577  193267 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 22:58:34.615135  193267 ssh_runner.go:195] Run: cat /version.json
	I1008 22:58:34.615183  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:34.615417  193267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:58:34.615470  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:58:34.660097  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:34.667777  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:58:34.769606  193267 ssh_runner.go:195] Run: systemctl --version
	I1008 22:58:34.876441  193267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:58:34.918473  193267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:58:34.923838  193267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:58:34.923956  193267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:58:34.967286  193267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:58:34.967364  193267 start.go:495] detecting cgroup driver to use...
	I1008 22:58:34.967422  193267 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:58:34.967508  193267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:58:34.985916  193267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:58:34.999098  193267 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:58:34.999162  193267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:58:35.020431  193267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:58:35.040581  193267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:58:35.160089  193267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:58:35.293732  193267 docker.go:234] disabling docker service ...
	I1008 22:58:35.293800  193267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:58:35.314372  193267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:58:35.327382  193267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:58:35.455914  193267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:58:35.569247  193267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:58:35.582570  193267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:58:35.597097  193267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:58:35.597229  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.606209  193267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:58:35.606341  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.615508  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.624771  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.633844  193267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:58:35.641842  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.650476  193267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.664476  193267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:35.673185  193267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:58:35.681467  193267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:58:35.688784  193267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:58:35.793775  193267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 22:58:35.911022  193267 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:58:35.911105  193267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:58:35.914929  193267 start.go:563] Will wait 60s for crictl version
	I1008 22:58:35.914993  193267 ssh_runner.go:195] Run: which crictl
	I1008 22:58:35.918532  193267 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:58:35.946686  193267 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:58:35.946780  193267 ssh_runner.go:195] Run: crio --version
	I1008 22:58:35.975839  193267 ssh_runner.go:195] Run: crio --version
	I1008 22:58:36.012699  193267 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:58:36.015745  193267 cli_runner.go:164] Run: docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:58:36.033049  193267 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 22:58:36.037086  193267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:58:36.047762  193267 kubeadm.go:883] updating cluster {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:58:36.047897  193267 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:36.047957  193267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:58:36.083068  193267 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:58:36.083095  193267 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:58:36.083188  193267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:58:36.109108  193267 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:58:36.109133  193267 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:58:36.109143  193267 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1008 22:58:36.109240  193267 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-825429 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:58:36.109331  193267 ssh_runner.go:195] Run: crio config
	I1008 22:58:36.193225  193267 cni.go:84] Creating CNI manager for ""
	I1008 22:58:36.193258  193267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:58:36.193289  193267 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:58:36.193334  193267 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825429 NodeName:embed-certs-825429 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:58:36.193550  193267 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825429"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:58:36.193691  193267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:58:36.203082  193267 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:58:36.203191  193267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:58:36.211063  193267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1008 22:58:36.224482  193267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:58:36.239799  193267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1008 22:58:36.253374  193267 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:58:36.257259  193267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:58:36.268856  193267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:58:36.384703  193267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:58:36.403325  193267 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429 for IP: 192.168.76.2
	I1008 22:58:36.403349  193267 certs.go:195] generating shared ca certs ...
	I1008 22:58:36.403365  193267 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:36.403538  193267 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:58:36.403603  193267 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:58:36.403617  193267 certs.go:257] generating profile certs ...
	I1008 22:58:36.403701  193267 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key
	I1008 22:58:36.403719  193267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.crt with IP's: []
	I1008 22:58:37.061191  193267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.crt ...
	I1008 22:58:37.061224  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.crt: {Name:mkdf8e21f9059b7b8a2cb821778833bc60d65743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:37.061455  193267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key ...
	I1008 22:58:37.061471  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key: {Name:mkfa72764401323eced50bfab5c424645f2285c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:37.061602  193267 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3
	I1008 22:58:37.061623  193267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt.6dc562e3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1008 22:58:37.640056  193267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt.6dc562e3 ...
	I1008 22:58:37.640086  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt.6dc562e3: {Name:mk639d77fd638bc7cf2bfdd5b5da4ff52e78a8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:37.640343  193267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3 ...
	I1008 22:58:37.640359  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3: {Name:mkd84f206e09830d5522cca9aeb26202b3227cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:37.640488  193267 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt.6dc562e3 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt
	I1008 22:58:37.640611  193267 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key
	I1008 22:58:37.640707  193267 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key
	I1008 22:58:37.640750  193267 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt with IP's: []
	I1008 22:58:38.246731  193267 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt ...
	I1008 22:58:38.246804  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt: {Name:mk28574d9b2c45516767271026e48f4821fd4994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:38.247050  193267 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key ...
	I1008 22:58:38.247085  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key: {Name:mk1f3d84dcb2ff752724190c701ad4391a99be75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:38.247345  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:58:38.247411  193267 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:58:38.247439  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:58:38.247503  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:58:38.247565  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:58:38.247649  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:58:38.247727  193267 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:58:38.248354  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:58:38.273018  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:58:38.294248  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:58:38.314440  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:58:38.335711  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 22:58:38.354730  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 22:58:38.376418  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:58:38.398265  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 22:58:38.417612  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:58:38.438812  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:58:38.459790  193267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:58:38.483561  193267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:58:38.502880  193267 ssh_runner.go:195] Run: openssl version
	I1008 22:58:38.510878  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:58:38.532793  193267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:58:38.541137  193267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:58:38.541199  193267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:58:38.590261  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 22:58:38.599831  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:58:38.610627  193267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:38.615087  193267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:38.615153  193267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:38.675167  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:58:38.686298  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:58:38.696623  193267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:58:38.701146  193267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:58:38.701220  193267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:58:38.749931  193267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:58:38.758877  193267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:58:38.763789  193267 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 22:58:38.763837  193267 kubeadm.go:400] StartCluster: {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:58:38.763905  193267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:58:38.763970  193267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:58:38.795549  193267 cri.go:89] found id: ""
	I1008 22:58:38.795625  193267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:58:38.805443  193267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:58:38.813573  193267 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:58:38.813701  193267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:58:38.821386  193267 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:58:38.821410  193267 kubeadm.go:157] found existing configuration files:
	
	I1008 22:58:38.821469  193267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 22:58:38.829768  193267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:58:38.829845  193267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:58:38.837896  193267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 22:58:38.847236  193267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:58:38.847295  193267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:58:38.859690  193267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 22:58:38.870044  193267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:58:38.870105  193267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:58:38.882749  193267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 22:58:38.893303  193267 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:58:38.893365  193267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:58:38.903901  193267 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:58:38.977974  193267 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:58:38.978345  193267 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:58:39.006429  193267 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:58:39.006781  193267 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:58:39.006833  193267 kubeadm.go:318] OS: Linux
	I1008 22:58:39.006887  193267 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:58:39.006941  193267 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:58:39.006995  193267 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:58:39.007049  193267 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:58:39.007115  193267 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:58:39.007170  193267 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:58:39.007221  193267 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:58:39.007276  193267 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:58:39.007329  193267 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 22:58:39.090608  193267 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:58:39.090723  193267 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:58:39.090818  193267 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:58:39.103102  193267 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:58:37.085738  193942 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 22:58:37.085760  193942 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-779490"
	I1008 22:58:37.085819  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:37.125207  193942 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:37.125516  193942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1008 22:58:37.125529  193942 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-779490 && echo "default-k8s-diff-port-779490" | sudo tee /etc/hostname
	I1008 22:58:37.304747  193942 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 22:58:37.304882  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:37.327885  193942 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:37.328194  193942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1008 22:58:37.328215  193942 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-779490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-779490/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-779490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 22:58:37.478511  193942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 22:58:37.478587  193942 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 22:58:37.478622  193942 ubuntu.go:190] setting up certificates
	I1008 22:58:37.478659  193942 provision.go:84] configureAuth start
	I1008 22:58:37.478761  193942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 22:58:37.501938  193942 provision.go:143] copyHostCerts
	I1008 22:58:37.502003  193942 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 22:58:37.502013  193942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 22:58:37.502088  193942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 22:58:37.502183  193942 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 22:58:37.502190  193942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 22:58:37.502222  193942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 22:58:37.502278  193942 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 22:58:37.502283  193942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 22:58:37.502307  193942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 22:58:37.502353  193942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-779490 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-779490 localhost minikube]
	I1008 22:58:37.996969  193942 provision.go:177] copyRemoteCerts
	I1008 22:58:37.997082  193942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 22:58:37.997150  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.025747  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:38.138270  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 22:58:38.158442  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 22:58:38.178797  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 22:58:38.198611  193942 provision.go:87] duration metric: took 719.911754ms to configureAuth
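The server certificate generated by configureAuth above is signed for the SANs shown in the log (127.0.0.1, 192.168.85.2, default-k8s-diff-port-779490, localhost, minikube). A minimal Go sketch, not part of minikube, that reads such a server.pem and prints its SANs for verification (the relative path is an assumption; substitute the machines/server.pem path from the log):

// san_check.go: print DNS and IP SANs of a PEM-encoded certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical path; point this at the server.pem written by configureAuth.
	data, err := os.ReadFile("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Expect the SANs logged above: localhost, minikube, the profile name and node IPs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}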
	I1008 22:58:38.198638  193942 ubuntu.go:206] setting minikube options for container-runtime
	I1008 22:58:38.198819  193942 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:58:38.198927  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.218397  193942 main.go:141] libmachine: Using SSH client type: native
	I1008 22:58:38.218703  193942 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33076 <nil> <nil>}
	I1008 22:58:38.218724  193942 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 22:58:38.515693  193942 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 22:58:38.515755  193942 machine.go:96] duration metric: took 4.610341316s to provisionDockerMachine
	I1008 22:58:38.515789  193942 client.go:171] duration metric: took 12.249555066s to LocalClient.Create
	I1008 22:58:38.515836  193942 start.go:167] duration metric: took 12.249652995s to libmachine.API.Create "default-k8s-diff-port-779490"
	I1008 22:58:38.515866  193942 start.go:293] postStartSetup for "default-k8s-diff-port-779490" (driver="docker")
	I1008 22:58:38.515893  193942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 22:58:38.515993  193942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 22:58:38.516063  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.540990  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:38.647743  193942 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 22:58:38.651610  193942 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 22:58:38.651635  193942 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 22:58:38.651645  193942 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 22:58:38.651697  193942 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 22:58:38.651779  193942 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 22:58:38.651880  193942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 22:58:38.661802  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:58:38.686000  193942 start.go:296] duration metric: took 170.105406ms for postStartSetup
	I1008 22:58:38.686470  193942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 22:58:38.707844  193942 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 22:58:38.708114  193942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:58:38.708173  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.731226  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:38.831933  193942 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 22:58:38.837460  193942 start.go:128] duration metric: took 12.574356156s to createHost
	I1008 22:58:38.837487  193942 start.go:83] releasing machines lock for "default-k8s-diff-port-779490", held for 12.574479727s
	I1008 22:58:38.837558  193942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 22:58:38.863802  193942 ssh_runner.go:195] Run: cat /version.json
	I1008 22:58:38.863850  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.864085  193942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 22:58:38.864137  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:58:38.900231  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:38.913384  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:58:39.014810  193942 ssh_runner.go:195] Run: systemctl --version
	I1008 22:58:39.112380  193942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 22:58:39.168303  193942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 22:58:39.174524  193942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 22:58:39.174695  193942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 22:58:39.206521  193942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 22:58:39.206545  193942 start.go:495] detecting cgroup driver to use...
	I1008 22:58:39.206615  193942 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 22:58:39.206700  193942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 22:58:39.226905  193942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 22:58:39.241766  193942 docker.go:218] disabling cri-docker service (if available) ...
	I1008 22:58:39.241915  193942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 22:58:39.260899  193942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 22:58:39.283060  193942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 22:58:39.495447  193942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 22:58:39.673775  193942 docker.go:234] disabling docker service ...
	I1008 22:58:39.673852  193942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 22:58:39.698825  193942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 22:58:39.713795  193942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 22:58:39.856905  193942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 22:58:40.007009  193942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 22:58:40.024999  193942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 22:58:40.044193  193942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 22:58:40.044271  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.054527  193942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 22:58:40.054609  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.064733  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.075759  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.086551  193942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 22:58:40.098332  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.112746  193942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.129109  193942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 22:58:40.139485  193942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 22:58:40.148411  193942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 22:58:40.157545  193942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:58:40.300738  193942 ssh_runner.go:195] Run: sudo systemctl restart crio
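The sequence of sed edits above pins the pause image and the cgroup manager in CRI-O's drop-in config before the daemon is restarted. A rough Go equivalent of the two central edits, assuming the same /etc/crio/crio.conf.d/02-crio.conf path and root privileges (illustrative only, not minikube's implementation):

// crio_conf.go: rewrite pause_image and cgroup_manager in CRI-O's drop-in config.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		log.Fatal(err)
	}
}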
	I1008 22:58:40.485242  193942 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 22:58:40.485411  193942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 22:58:40.490402  193942 start.go:563] Will wait 60s for crictl version
	I1008 22:58:40.490522  193942 ssh_runner.go:195] Run: which crictl
	I1008 22:58:40.494619  193942 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 22:58:40.520950  193942 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 22:58:40.521129  193942 ssh_runner.go:195] Run: crio --version
	I1008 22:58:40.552869  193942 ssh_runner.go:195] Run: crio --version
	I1008 22:58:40.599308  193942 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 22:58:40.602116  193942 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 22:58:40.622876  193942 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 22:58:40.628292  193942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
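The bash one-liner above makes the host.minikube.internal entry idempotent: it filters out any stale line and appends the current gateway IP. A small Go sketch of the same idea (IP and path taken from the log; root privileges assumed):

// hosts_entry.go: drop any stale host.minikube.internal line, then append the gateway IP.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // remove the old entry, if any
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("updated /etc/hosts with", entry)
}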
	I1008 22:58:40.641026  193942 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 22:58:40.641152  193942 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 22:58:40.641225  193942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:58:40.700347  193942 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:58:40.700374  193942 crio.go:433] Images already preloaded, skipping extraction
	I1008 22:58:40.700452  193942 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 22:58:40.734215  193942 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 22:58:40.734254  193942 cache_images.go:85] Images are preloaded, skipping loading
	I1008 22:58:40.734263  193942 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1008 22:58:40.734360  193942 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-779490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 22:58:40.734491  193942 ssh_runner.go:195] Run: crio config
	I1008 22:58:40.817431  193942 cni.go:84] Creating CNI manager for ""
	I1008 22:58:40.817465  193942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:58:40.817486  193942 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 22:58:40.817508  193942 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-779490 NodeName:default-k8s-diff-port-779490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 22:58:40.817678  193942 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-779490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 22:58:40.817764  193942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 22:58:40.826337  193942 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 22:58:40.826414  193942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 22:58:40.835791  193942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1008 22:58:40.852793  193942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 22:58:40.866921  193942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
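The 2225-byte kubeadm.yaml.new copied above is the multi-document config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check, sketched in Go with the third-party gopkg.in/yaml.v3 package (an assumption, not something the log shows), is to split the file into documents and print each kind before handing it to kubeadm:

// kubeadm_yaml_check.go: list the documents in the rendered kubeadm config.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration,
		// KubeProxyConfiguration, matching the config rendered above.
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}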
	I1008 22:58:40.881794  193942 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 22:58:40.885818  193942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 22:58:40.895637  193942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:58:39.109546  193267 out.go:252]   - Generating certificates and keys ...
	I1008 22:58:39.109669  193267 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:58:39.109738  193267 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:58:39.281483  193267 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 22:58:39.646542  193267 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 22:58:40.040744  193267 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 22:58:40.635786  193267 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 22:58:41.761977  193267 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 22:58:41.762120  193267 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-825429 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 22:58:41.052887  193942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:58:41.069143  193942 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490 for IP: 192.168.85.2
	I1008 22:58:41.069168  193942 certs.go:195] generating shared ca certs ...
	I1008 22:58:41.069184  193942 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.069349  193942 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 22:58:41.069407  193942 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 22:58:41.069420  193942 certs.go:257] generating profile certs ...
	I1008 22:58:41.069492  193942 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key
	I1008 22:58:41.069524  193942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt with IP's: []
	I1008 22:58:41.610318  193942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt ...
	I1008 22:58:41.610352  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: {Name:mk6078106987510267b1b0e1a9d7470df5ff04d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.610543  193942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key ...
	I1008 22:58:41.610559  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key: {Name:mk0f7a7c762f34bcab92d826adf9dc16432a6764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.610650  193942 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765
	I1008 22:58:41.610668  193942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt.e9b65765 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1008 22:58:41.820500  193942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt.e9b65765 ...
	I1008 22:58:41.820532  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt.e9b65765: {Name:mk3902d08d3c330d1c0272056ea7bfdcd8d45f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.820736  193942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765 ...
	I1008 22:58:41.820751  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765: {Name:mk69a7b7e64ea74a698b74781a61b3846d80a8e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:41.820843  193942 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt.e9b65765 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt
	I1008 22:58:41.820927  193942 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key
	I1008 22:58:41.820990  193942 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key
	I1008 22:58:41.821008  193942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt with IP's: []
	I1008 22:58:42.407452  193942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt ...
	I1008 22:58:42.407535  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt: {Name:mka0acd8e40bb16d49f151b2b541fd8cbfc63c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:42.407820  193942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key ...
	I1008 22:58:42.407864  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key: {Name:mk8ccb71f5cb93a8a35fd14a12573f4c958bdc51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:58:42.408191  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 22:58:42.408283  193942 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 22:58:42.408342  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 22:58:42.408398  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 22:58:42.408482  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 22:58:42.408540  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 22:58:42.408636  193942 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 22:58:42.409358  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 22:58:42.431160  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 22:58:42.451959  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 22:58:42.472746  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 22:58:42.494274  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 22:58:42.514679  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 22:58:42.534492  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 22:58:42.554936  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 22:58:42.576065  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 22:58:42.611063  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 22:58:42.682727  193942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 22:58:42.706413  193942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 22:58:42.721569  193942 ssh_runner.go:195] Run: openssl version
	I1008 22:58:42.728768  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 22:58:42.738781  193942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:42.743540  193942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:42.743611  193942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 22:58:42.785536  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 22:58:42.795377  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 22:58:42.804765  193942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 22:58:42.809669  193942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 22:58:42.809746  193942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 22:58:42.852259  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 22:58:42.862400  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 22:58:42.872402  193942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 22:58:42.877957  193942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 22:58:42.878032  193942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 22:58:42.925394  193942 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
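The openssl/ln steps above install each CA PEM under /etc/ssl/certs with an OpenSSL subject-hash symlink (for example b5213941.0 for minikubeCA.pem) so TLS clients on the node can find it. A Go sketch of the same step for a single PEM, shelling out to openssl as the log does (paths assumed from the log; root required):

// cert_hash_link.go: create an OpenSSL subject-hash symlink for a CA PEM.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	caPEM := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPEM).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("already linked:", link)
		return
	}
	if err := os.Symlink(caPEM, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", caPEM)
}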
	I1008 22:58:42.935052  193942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 22:58:42.940324  193942 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 22:58:42.940393  193942 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:58:42.940481  193942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 22:58:42.940556  193942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 22:58:42.976059  193942 cri.go:89] found id: ""
	I1008 22:58:42.976139  193942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 22:58:42.986937  193942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 22:58:42.995598  193942 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 22:58:42.995670  193942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 22:58:43.008167  193942 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 22:58:43.008189  193942 kubeadm.go:157] found existing configuration files:
	
	I1008 22:58:43.008254  193942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1008 22:58:43.018575  193942 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 22:58:43.018673  193942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 22:58:43.027633  193942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1008 22:58:43.037598  193942 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 22:58:43.037691  193942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 22:58:43.046368  193942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1008 22:58:43.056394  193942 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 22:58:43.056465  193942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 22:58:43.065105  193942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1008 22:58:43.076000  193942 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 22:58:43.076077  193942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 22:58:43.084901  193942 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 22:58:43.150179  193942 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 22:58:43.150600  193942 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 22:58:43.181801  193942 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 22:58:43.181897  193942 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 22:58:43.181951  193942 kubeadm.go:318] OS: Linux
	I1008 22:58:43.182016  193942 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 22:58:43.182088  193942 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 22:58:43.182155  193942 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 22:58:43.182210  193942 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 22:58:43.182280  193942 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 22:58:43.182346  193942 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 22:58:43.182409  193942 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 22:58:43.182473  193942 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 22:58:43.182539  193942 kubeadm.go:318] CGROUPS_BLKIO: enabled
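The CGROUPS_* lines come from kubeadm's preflight system verification, which inspects the kernel's cgroup state. A comparable check, sketched here in Go rather than taken from kubeadm's own code, reads /proc/cgroups, whose last column flags whether each v1 controller is enabled:

// cgroup_check.go: report which cgroup v1 controllers the kernel has enabled.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // skip the header row
		}
		// Format: subsys_name hierarchy num_cgroups enabled
		fields := strings.Fields(line)
		if len(fields) < 4 {
			continue
		}
		state := "disabled"
		if fields[3] == "1" {
			state = "enabled"
		}
		fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}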
	I1008 22:58:43.262230  193942 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 22:58:43.262357  193942 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 22:58:43.262473  193942 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 22:58:43.274179  193942 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 22:58:41.998064  193267 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 22:58:41.998223  193267 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-825429 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 22:58:42.582024  193267 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 22:58:42.856851  193267 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 22:58:43.453984  193267 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 22:58:43.454061  193267 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:58:44.013001  193267 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:58:44.315058  193267 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:58:44.434679  193267 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:58:44.866116  193267 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:58:45.173418  193267 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:58:45.174707  193267 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:58:45.178193  193267 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:58:43.279957  193942 out.go:252]   - Generating certificates and keys ...
	I1008 22:58:43.280071  193942 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 22:58:43.280149  193942 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 22:58:44.738056  193942 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 22:58:45.105959  193942 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 22:58:45.420866  193942 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 22:58:45.568191  193942 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 22:58:45.851033  193942 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 22:58:45.851636  193942 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-779490 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:58:45.181934  193267 out.go:252]   - Booting up control plane ...
	I1008 22:58:45.182055  193267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:58:45.183988  193267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:58:45.186213  193267 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:58:45.210268  193267 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:58:45.210388  193267 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:58:45.221375  193267 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:58:45.221487  193267 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:58:45.221534  193267 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:58:45.420398  193267 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:58:45.420527  193267 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:58:46.921719  193267 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501644492s
	I1008 22:58:46.925300  193267 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:58:46.925401  193267 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1008 22:58:46.925647  193267 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:58:46.925739  193267 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:58:46.220524  193942 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 22:58:46.221139  193942 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-779490 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 22:58:46.459977  193942 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 22:58:47.214226  193942 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 22:58:48.096575  193942 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 22:58:48.096658  193942 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 22:58:48.517972  193942 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 22:58:49.076031  193942 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 22:58:50.064592  193942 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 22:58:50.098686  193942 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 22:58:50.921550  193942 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 22:58:50.922246  193942 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 22:58:50.924991  193942 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 22:58:50.928520  193942 out.go:252]   - Booting up control plane ...
	I1008 22:58:50.928633  193942 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 22:58:50.928718  193942 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 22:58:50.930110  193942 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 22:58:50.955071  193942 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 22:58:50.955182  193942 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 22:58:50.962921  193942 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 22:58:50.963256  193942 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 22:58:50.963304  193942 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 22:58:51.176857  193942 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 22:58:51.181205  193942 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 22:58:52.188911  193942 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.003751893s
	I1008 22:58:52.189023  193942 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 22:58:52.189107  193942 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1008 22:58:52.189199  193942 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 22:58:52.189281  193942 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 22:58:52.415761  193267 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.48961388s
	I1008 22:58:54.595484  193267 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.670172572s
	I1008 22:58:56.427676  193267 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.502045243s
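The [kubelet-check] and [control-plane-check] phases above simply poll health endpoints until they answer or a four-minute deadline expires. A minimal Go sketch of the kubelet half of that wait (the control-plane checks work the same way against the https livez/healthz URLs, which additionally need TLS settings omitted here):

// kubelet_healthz.go: poll the local kubelet healthz endpoint until it reports healthy.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	const url = "http://127.0.0.1:10248/healthz"
	deadline := time.Now().Add(4 * time.Minute) // kubeadm allows up to 4m0s
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	log.Fatal("kubelet did not become healthy before the deadline")
}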
	I1008 22:58:56.460024  193267 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 22:58:56.485124  193267 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 22:58:56.504848  193267 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 22:58:56.505328  193267 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-825429 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 22:58:56.549005  193267 kubeadm.go:318] [bootstrap-token] Using token: 7u8re5.dkmizverazog8if9
	I1008 22:58:56.552047  193267 out.go:252]   - Configuring RBAC rules ...
	I1008 22:58:56.552175  193267 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 22:58:56.596777  193267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 22:58:56.610180  193267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 22:58:56.623090  193267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 22:58:56.633877  193267 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 22:58:56.640676  193267 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 22:58:56.840695  193267 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 22:58:57.499787  193267 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 22:58:57.841419  193267 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 22:58:57.842769  193267 kubeadm.go:318] 
	I1008 22:58:57.842848  193267 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 22:58:57.842853  193267 kubeadm.go:318] 
	I1008 22:58:57.842934  193267 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 22:58:57.842939  193267 kubeadm.go:318] 
	I1008 22:58:57.842965  193267 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 22:58:57.843027  193267 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 22:58:57.843081  193267 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 22:58:57.843085  193267 kubeadm.go:318] 
	I1008 22:58:57.843142  193267 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 22:58:57.843146  193267 kubeadm.go:318] 
	I1008 22:58:57.843196  193267 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 22:58:57.843207  193267 kubeadm.go:318] 
	I1008 22:58:57.843262  193267 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 22:58:57.843340  193267 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 22:58:57.843418  193267 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 22:58:57.843423  193267 kubeadm.go:318] 
	I1008 22:58:57.843511  193267 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 22:58:57.843591  193267 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 22:58:57.843595  193267 kubeadm.go:318] 
	I1008 22:58:57.843683  193267 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7u8re5.dkmizverazog8if9 \
	I1008 22:58:57.843803  193267 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 22:58:57.843835  193267 kubeadm.go:318] 	--control-plane 
	I1008 22:58:57.843841  193267 kubeadm.go:318] 
	I1008 22:58:57.844246  193267 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 22:58:57.844258  193267 kubeadm.go:318] 
	I1008 22:58:57.844344  193267 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7u8re5.dkmizverazog8if9 \
	I1008 22:58:57.844454  193267 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 22:58:57.854425  193267 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:58:57.854766  193267 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:58:57.854928  193267 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
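The --discovery-token-ca-cert-hash value printed in the join command above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A short Go sketch that recomputes it from the standard kubeadm CA path (path assumed; not minikube code):

// ca_cert_hash.go: recompute kubeadm's discovery-token-ca-cert-hash from the CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER-encoded Subject Public Key Info, as kubeadm does.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}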
	I1008 22:58:57.854971  193267 cni.go:84] Creating CNI manager for ""
	I1008 22:58:57.854994  193267 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:58:57.860300  193267 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 22:58:59.711770  193942 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.523059433s
	I1008 22:59:00.214966  193942 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.02657341s
	I1008 22:58:57.863226  193267 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 22:58:57.868227  193267 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 22:58:57.868257  193267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 22:58:57.923152  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 22:58:58.468793  193267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 22:58:58.468920  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:58:58.469003  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-825429 minikube.k8s.io/updated_at=2025_10_08T22_58_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=embed-certs-825429 minikube.k8s.io/primary=true
	I1008 22:58:58.986172  193267 ops.go:34] apiserver oom_adj: -16
	I1008 22:58:58.986274  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:58:59.486838  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:58:59.987175  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:00.486918  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:00.987120  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:01.487194  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:01.986546  193267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:02.109491  193267 kubeadm.go:1113] duration metric: took 3.640615382s to wait for elevateKubeSystemPrivileges
	I1008 22:59:02.109520  193267 kubeadm.go:402] duration metric: took 23.3456883s to StartCluster
	I1008 22:59:02.109539  193267 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:59:02.109603  193267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:59:02.110717  193267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:59:02.110960  193267 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:59:02.111074  193267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 22:59:02.111305  193267 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:59:02.111342  193267 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:59:02.111406  193267 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825429"
	I1008 22:59:02.111420  193267 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-825429"
	I1008 22:59:02.111441  193267 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 22:59:02.111993  193267 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825429"
	I1008 22:59:02.112018  193267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825429"
	I1008 22:59:02.112342  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:59:02.112559  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:59:02.114298  193267 out.go:179] * Verifying Kubernetes components...
	I1008 22:59:02.117453  193267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:59:02.159648  193267 addons.go:238] Setting addon default-storageclass=true in "embed-certs-825429"
	I1008 22:59:02.159690  193267 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 22:59:02.160127  193267 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 22:59:02.175987  193267 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:59:02.190192  193942 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.001664249s
	I1008 22:59:02.246484  193942 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 22:59:02.266160  193942 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 22:59:02.283295  193942 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 22:59:02.283854  193942 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-779490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 22:59:02.300647  193942 kubeadm.go:318] [bootstrap-token] Using token: gg0985.x9u9zh7hb4308wrl
	I1008 22:59:02.303761  193942 out.go:252]   - Configuring RBAC rules ...
	I1008 22:59:02.303892  193942 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 22:59:02.311276  193942 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 22:59:02.326189  193942 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 22:59:02.331151  193942 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 22:59:02.335589  193942 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 22:59:02.340215  193942 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 22:59:02.599157  193942 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 22:59:03.117499  193942 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 22:59:03.604170  193942 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 22:59:03.605203  193942 kubeadm.go:318] 
	I1008 22:59:03.605274  193942 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 22:59:03.605280  193942 kubeadm.go:318] 
	I1008 22:59:03.605367  193942 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 22:59:03.605373  193942 kubeadm.go:318] 
	I1008 22:59:03.605399  193942 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 22:59:03.605460  193942 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 22:59:03.605514  193942 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 22:59:03.605519  193942 kubeadm.go:318] 
	I1008 22:59:03.605575  193942 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 22:59:03.605579  193942 kubeadm.go:318] 
	I1008 22:59:03.605645  193942 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 22:59:03.605652  193942 kubeadm.go:318] 
	I1008 22:59:03.605707  193942 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 22:59:03.605784  193942 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 22:59:03.605864  193942 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 22:59:03.605869  193942 kubeadm.go:318] 
	I1008 22:59:03.605957  193942 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 22:59:03.606036  193942 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 22:59:03.606041  193942 kubeadm.go:318] 
	I1008 22:59:03.606132  193942 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token gg0985.x9u9zh7hb4308wrl \
	I1008 22:59:03.606240  193942 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 22:59:03.606261  193942 kubeadm.go:318] 	--control-plane 
	I1008 22:59:03.606266  193942 kubeadm.go:318] 
	I1008 22:59:03.606354  193942 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 22:59:03.606358  193942 kubeadm.go:318] 
	I1008 22:59:03.606711  193942 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token gg0985.x9u9zh7hb4308wrl \
	I1008 22:59:03.606824  193942 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 22:59:03.620714  193942 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 22:59:03.620977  193942 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 22:59:03.621094  193942 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 22:59:03.621178  193942 cni.go:84] Creating CNI manager for ""
	I1008 22:59:03.621189  193942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 22:59:03.624414  193942 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 22:59:02.181880  193267 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:59:02.181909  193267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:59:02.181979  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:59:02.200394  193267 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:59:02.200416  193267 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:59:02.200484  193267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 22:59:02.237298  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:59:02.248302  193267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33071 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 22:59:02.494274  193267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 22:59:02.532363  193267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:59:02.573289  193267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:59:02.639027  193267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:59:03.593401  193267 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.099087689s)
	I1008 22:59:03.593438  193267 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1008 22:59:03.593769  193267 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.061370571s)
	I1008 22:59:03.594652  193267 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825429" to be "Ready" ...
	I1008 22:59:04.067779  193267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.494449541s)
	I1008 22:59:04.067853  193267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.42878533s)
	I1008 22:59:04.106440  193267 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 22:59:03.627845  193942 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 22:59:03.632743  193942 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 22:59:03.632762  193942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 22:59:03.652673  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 22:59:04.158896  193942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 22:59:04.159031  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:04.159103  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-779490 minikube.k8s.io/updated_at=2025_10_08T22_59_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=default-k8s-diff-port-779490 minikube.k8s.io/primary=true
	I1008 22:59:04.363987  193942 ops.go:34] apiserver oom_adj: -16
	I1008 22:59:04.364093  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:04.864801  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:05.364615  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:05.864352  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:04.109468  193267 addons.go:514] duration metric: took 1.998089049s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 22:59:04.111508  193267 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-825429" context rescaled to 1 replicas
	W1008 22:59:05.597374  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	I1008 22:59:06.364695  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:06.864463  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:07.364932  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:07.864697  193942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 22:59:08.000229  193942 kubeadm.go:1113] duration metric: took 3.841240748s to wait for elevateKubeSystemPrivileges
	I1008 22:59:08.000266  193942 kubeadm.go:402] duration metric: took 25.059877981s to StartCluster
	I1008 22:59:08.000284  193942 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:59:08.000348  193942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:59:08.002240  193942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 22:59:08.002577  193942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 22:59:08.003101  193942 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:59:08.003187  193942 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 22:59:08.003220  193942 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 22:59:08.003647  193942 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-779490"
	I1008 22:59:08.003668  193942 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-779490"
	I1008 22:59:08.003694  193942 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 22:59:08.004032  193942 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-779490"
	I1008 22:59:08.004099  193942 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-779490"
	I1008 22:59:08.004222  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:59:08.004505  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:59:08.007530  193942 out.go:179] * Verifying Kubernetes components...
	I1008 22:59:08.012642  193942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 22:59:08.049795  193942 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-779490"
	I1008 22:59:08.049839  193942 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 22:59:08.050011  193942 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 22:59:08.050276  193942 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 22:59:08.053726  193942 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:59:08.053752  193942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 22:59:08.053819  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:59:08.082215  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:59:08.088803  193942 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 22:59:08.088824  193942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 22:59:08.088892  193942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 22:59:08.125403  193942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33076 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 22:59:08.271936  193942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 22:59:08.338609  193942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 22:59:08.339590  193942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 22:59:08.364911  193942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 22:59:09.182616  193942 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1008 22:59:09.184995  193942 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 22:59:09.238888  193942 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 22:59:09.242039  193942 addons.go:514] duration metric: took 1.238799643s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 22:59:09.686761  193942 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-779490" context rescaled to 1 replicas
	W1008 22:59:07.598451  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:10.097872  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:11.187947  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:13.188676  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:15.188799  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:12.098534  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:14.598137  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:17.688134  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:19.688529  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:17.097964  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:19.098348  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:21.597786  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:21.689711  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:24.188693  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:24.098301  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:26.598323  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:26.688435  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:28.688525  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:30.688650  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:29.098120  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:31.098760  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:33.188474  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:35.688011  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:33.598283  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:36.097755  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:37.688535  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:40.188547  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:38.597980  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:41.097407  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	W1008 22:59:42.189118  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:44.688444  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:43.098653  193267 node_ready.go:57] node "embed-certs-825429" has "Ready":"False" status (will retry)
	I1008 22:59:45.597837  193267 node_ready.go:49] node "embed-certs-825429" is "Ready"
	I1008 22:59:45.597867  193267 node_ready.go:38] duration metric: took 42.003157205s for node "embed-certs-825429" to be "Ready" ...
	I1008 22:59:45.597881  193267 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:59:45.597975  193267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:59:45.610225  193267 api_server.go:72] duration metric: took 43.49922909s to wait for apiserver process to appear ...
	I1008 22:59:45.610251  193267 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:59:45.610270  193267 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1008 22:59:45.619170  193267 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1008 22:59:45.620217  193267 api_server.go:141] control plane version: v1.34.1
	I1008 22:59:45.620240  193267 api_server.go:131] duration metric: took 9.981516ms to wait for apiserver health ...
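For context, the healthz wait logged above (api_server.go) amounts to polling the apiserver's /healthz endpoint until it answers 200. A minimal Go sketch of that idea follows; the host and port are taken from the log, and skipping TLS verification is an illustration-only shortcut (minikube itself trusts the cluster CA), so treat this as a hypothetical probe, not minikube's actual implementation.

    // healthz_probe.go - illustrative sketch of waiting for the apiserver /healthz endpoint.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		// InsecureSkipVerify is only for this sketch; real tooling verifies the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    				return
    			}
    		}
    		time.Sleep(2 * time.Second) // keep polling until the deadline, as the wait above does
    	}
    	fmt.Println("apiserver did not become healthy before the deadline")
    }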
	I1008 22:59:45.620250  193267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:59:45.623500  193267 system_pods.go:59] 8 kube-system pods found
	I1008 22:59:45.623540  193267 system_pods.go:61] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:45.623547  193267 system_pods.go:61] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 22:59:45.623553  193267 system_pods.go:61] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 22:59:45.623559  193267 system_pods.go:61] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running
	I1008 22:59:45.623564  193267 system_pods.go:61] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running
	I1008 22:59:45.623568  193267 system_pods.go:61] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 22:59:45.623573  193267 system_pods.go:61] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running
	I1008 22:59:45.623583  193267 system_pods.go:61] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:45.623589  193267 system_pods.go:74] duration metric: took 3.333339ms to wait for pod list to return data ...
	I1008 22:59:45.623602  193267 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:59:45.626757  193267 default_sa.go:45] found service account: "default"
	I1008 22:59:45.626791  193267 default_sa.go:55] duration metric: took 3.1741ms for default service account to be created ...
	I1008 22:59:45.626802  193267 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:59:45.630100  193267 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:45.630134  193267 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:45.630141  193267 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 22:59:45.630148  193267 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 22:59:45.630152  193267 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running
	I1008 22:59:45.630157  193267 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running
	I1008 22:59:45.630163  193267 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 22:59:45.630174  193267 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running
	I1008 22:59:45.630180  193267 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:45.630213  193267 retry.go:31] will retry after 253.798696ms: missing components: kube-dns
	I1008 22:59:45.916124  193267 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:45.916161  193267 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:45.916169  193267 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 22:59:45.916175  193267 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 22:59:45.916179  193267 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running
	I1008 22:59:45.916184  193267 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running
	I1008 22:59:45.916188  193267 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 22:59:45.916193  193267 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running
	I1008 22:59:45.916202  193267 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:45.916218  193267 retry.go:31] will retry after 290.004825ms: missing components: kube-dns
	I1008 22:59:46.211080  193267 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:46.211119  193267 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:46.211127  193267 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 22:59:46.211133  193267 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 22:59:46.211138  193267 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running
	I1008 22:59:46.211143  193267 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running
	I1008 22:59:46.211147  193267 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 22:59:46.211151  193267 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running
	I1008 22:59:46.211165  193267 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 22:59:46.211177  193267 system_pods.go:126] duration metric: took 584.369269ms to wait for k8s-apps to be running ...
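The retry.go lines above ("will retry after ...: missing components: kube-dns") show the poll-and-retry pattern used while kube-system pods come up. Below is a small, hypothetical Go sketch of that pattern; the fake podsRunning check, the component name, and the backoff values are illustrative stand-ins, not minikube's actual API.

    // retry_sketch.go - hypothetical poll-and-retry loop for missing kube-system components.
    package main

    import (
    	"fmt"
    	"time"
    )

    // podsRunning stands in for a real check against the kube-system pod list.
    func podsRunning(attempt int) (missing []string) {
    	if attempt < 3 {
    		return []string{"kube-dns"} // pretend CoreDNS is still Pending on the first attempts
    	}
    	return nil
    }

    func main() {
    	backoff := 250 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		missing := podsRunning(attempt)
    		if len(missing) == 0 {
    			fmt.Println("all required kube-system components are running")
    			return
    		}
    		fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
    		time.Sleep(backoff)
    		backoff *= 2 // widen the interval between polls, as the growing retry delays above suggest
    	}
    }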
	I1008 22:59:46.211190  193267 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:59:46.211285  193267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:59:46.227658  193267 system_svc.go:56] duration metric: took 16.451548ms WaitForService to wait for kubelet
	I1008 22:59:46.227688  193267 kubeadm.go:586] duration metric: took 44.116696946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:59:46.227714  193267 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:59:46.231337  193267 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:59:46.231380  193267 node_conditions.go:123] node cpu capacity is 2
	I1008 22:59:46.231393  193267 node_conditions.go:105] duration metric: took 3.668227ms to run NodePressure ...
	I1008 22:59:46.231406  193267 start.go:241] waiting for startup goroutines ...
	I1008 22:59:46.231413  193267 start.go:246] waiting for cluster config update ...
	I1008 22:59:46.231424  193267 start.go:255] writing updated cluster config ...
	I1008 22:59:46.231716  193267 ssh_runner.go:195] Run: rm -f paused
	I1008 22:59:46.235559  193267 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:59:46.239521  193267 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.251212  193267 pod_ready.go:94] pod "coredns-66bc5c9577-s7kcb" is "Ready"
	I1008 22:59:47.251241  193267 pod_ready.go:86] duration metric: took 1.011692139s for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.254064  193267 pod_ready.go:83] waiting for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.259151  193267 pod_ready.go:94] pod "etcd-embed-certs-825429" is "Ready"
	I1008 22:59:47.259198  193267 pod_ready.go:86] duration metric: took 5.107579ms for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.261988  193267 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.267240  193267 pod_ready.go:94] pod "kube-apiserver-embed-certs-825429" is "Ready"
	I1008 22:59:47.267269  193267 pod_ready.go:86] duration metric: took 5.253944ms for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.269959  193267 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.443116  193267 pod_ready.go:94] pod "kube-controller-manager-embed-certs-825429" is "Ready"
	I1008 22:59:47.443144  193267 pod_ready.go:86] duration metric: took 173.158605ms for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:47.643484  193267 pod_ready.go:83] waiting for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:48.043576  193267 pod_ready.go:94] pod "kube-proxy-86wtc" is "Ready"
	I1008 22:59:48.043653  193267 pod_ready.go:86] duration metric: took 400.142079ms for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:48.242901  193267 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:48.643357  193267 pod_ready.go:94] pod "kube-scheduler-embed-certs-825429" is "Ready"
	I1008 22:59:48.643392  193267 pod_ready.go:86] duration metric: took 400.45574ms for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:48.643405  193267 pod_ready.go:40] duration metric: took 2.407814607s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:59:48.705771  193267 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:59:48.709138  193267 out.go:179] * Done! kubectl is now configured to use "embed-certs-825429" cluster and "default" namespace by default
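Once the "Done!" line above reports that kubectl is configured, the new context can be exercised directly. A minimal sketch, assuming kubectl is on PATH and using the context name from the log:

    // verify_context.go - illustrative check of the freshly configured kubectl context.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("kubectl", "--context", "embed-certs-825429",
    		"get", "nodes", "-o", "wide").CombinedOutput()
    	if err != nil {
    		fmt.Printf("kubectl failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("%s", out)
    }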
	W1008 22:59:46.694731  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	W1008 22:59:49.188688  193942 node_ready.go:57] node "default-k8s-diff-port-779490" has "Ready":"False" status (will retry)
	I1008 22:59:50.188551  193942 node_ready.go:49] node "default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:50.188583  193942 node_ready.go:38] duration metric: took 41.00355039s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 22:59:50.188597  193942 api_server.go:52] waiting for apiserver process to appear ...
	I1008 22:59:50.188655  193942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:59:50.201942  193942 api_server.go:72] duration metric: took 42.198420099s to wait for apiserver process to appear ...
	I1008 22:59:50.201964  193942 api_server.go:88] waiting for apiserver healthz status ...
	I1008 22:59:50.201984  193942 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 22:59:50.211411  193942 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1008 22:59:50.213114  193942 api_server.go:141] control plane version: v1.34.1
	I1008 22:59:50.213137  193942 api_server.go:131] duration metric: took 11.166629ms to wait for apiserver health ...
	I1008 22:59:50.213146  193942 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 22:59:50.224131  193942 system_pods.go:59] 8 kube-system pods found
	I1008 22:59:50.224162  193942 system_pods.go:61] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:50.224169  193942 system_pods.go:61] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 22:59:50.224175  193942 system_pods.go:61] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 22:59:50.224180  193942 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running
	I1008 22:59:50.224184  193942 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running
	I1008 22:59:50.224189  193942 system_pods.go:61] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 22:59:50.224193  193942 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running
	I1008 22:59:50.224199  193942 system_pods.go:61] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:50.224205  193942 system_pods.go:74] duration metric: took 11.053348ms to wait for pod list to return data ...
	I1008 22:59:50.224212  193942 default_sa.go:34] waiting for default service account to be created ...
	I1008 22:59:50.239050  193942 default_sa.go:45] found service account: "default"
	I1008 22:59:50.239073  193942 default_sa.go:55] duration metric: took 14.855271ms for default service account to be created ...
	I1008 22:59:50.239083  193942 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 22:59:50.244512  193942 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:50.244542  193942 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:50.244548  193942 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 22:59:50.244555  193942 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 22:59:50.244559  193942 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running
	I1008 22:59:50.244563  193942 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running
	I1008 22:59:50.244569  193942 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 22:59:50.244573  193942 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running
	I1008 22:59:50.244579  193942 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:50.244599  193942 retry.go:31] will retry after 210.842736ms: missing components: kube-dns
	I1008 22:59:50.460342  193942 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:50.460372  193942 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 22:59:50.460379  193942 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 22:59:50.460386  193942 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 22:59:50.460391  193942 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running
	I1008 22:59:50.460396  193942 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running
	I1008 22:59:50.460400  193942 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 22:59:50.460404  193942 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running
	I1008 22:59:50.460409  193942 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1008 22:59:50.460436  193942 retry.go:31] will retry after 288.668809ms: missing components: kube-dns
	I1008 22:59:50.753537  193942 system_pods.go:86] 8 kube-system pods found
	I1008 22:59:50.753569  193942 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running
	I1008 22:59:50.753577  193942 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 22:59:50.753587  193942 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 22:59:50.753592  193942 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running
	I1008 22:59:50.753597  193942 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running
	I1008 22:59:50.753601  193942 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 22:59:50.753605  193942 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running
	I1008 22:59:50.753609  193942 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 22:59:50.753617  193942 system_pods.go:126] duration metric: took 514.5286ms to wait for k8s-apps to be running ...
	I1008 22:59:50.753625  193942 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 22:59:50.753720  193942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:59:50.766864  193942 system_svc.go:56] duration metric: took 13.229128ms WaitForService to wait for kubelet
	I1008 22:59:50.766894  193942 kubeadm.go:586] duration metric: took 42.763378089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 22:59:50.766913  193942 node_conditions.go:102] verifying NodePressure condition ...
	I1008 22:59:50.770047  193942 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 22:59:50.770076  193942 node_conditions.go:123] node cpu capacity is 2
	I1008 22:59:50.770088  193942 node_conditions.go:105] duration metric: took 3.169136ms to run NodePressure ...
	I1008 22:59:50.770101  193942 start.go:241] waiting for startup goroutines ...
	I1008 22:59:50.770109  193942 start.go:246] waiting for cluster config update ...
	I1008 22:59:50.770124  193942 start.go:255] writing updated cluster config ...
	I1008 22:59:50.770409  193942 ssh_runner.go:195] Run: rm -f paused
	I1008 22:59:50.774185  193942 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:59:50.778186  193942 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.783430  193942 pod_ready.go:94] pod "coredns-66bc5c9577-9xx2v" is "Ready"
	I1008 22:59:50.783509  193942 pod_ready.go:86] duration metric: took 5.295554ms for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.785842  193942 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.790504  193942 pod_ready.go:94] pod "etcd-default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:50.790529  193942 pod_ready.go:86] duration metric: took 4.664391ms for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.793395  193942 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.798845  193942 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:50.798875  193942 pod_ready.go:86] duration metric: took 5.40753ms for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:50.801561  193942 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:51.178891  193942 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:51.178965  193942 pod_ready.go:86] duration metric: took 377.377505ms for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:51.381307  193942 pod_ready.go:83] waiting for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:51.778918  193942 pod_ready.go:94] pod "kube-proxy-jrvxc" is "Ready"
	I1008 22:59:51.778946  193942 pod_ready.go:86] duration metric: took 397.611153ms for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:51.979208  193942 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:52.378345  193942 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-779490" is "Ready"
	I1008 22:59:52.378373  193942 pod_ready.go:86] duration metric: took 399.13808ms for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 22:59:52.378386  193942 pod_ready.go:40] duration metric: took 1.604168122s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 22:59:52.434097  193942 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 22:59:52.437557  193942 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-779490" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 22:59:50 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:50.421590892Z" level=info msg="Created container 2580ac2f849062d712a6d121e32855b68c38bbf7bd926f4895058204a4d2868e: kube-system/coredns-66bc5c9577-9xx2v/coredns" id=e54191dd-5d34-4673-8638-13ba7c7a0f44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:59:50 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:50.422518509Z" level=info msg="Starting container: 2580ac2f849062d712a6d121e32855b68c38bbf7bd926f4895058204a4d2868e" id=af229959-7e51-4f4c-95d3-4f732de00974 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:59:50 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:50.424927062Z" level=info msg="Started container" PID=1714 containerID=2580ac2f849062d712a6d121e32855b68c38bbf7bd926f4895058204a4d2868e description=kube-system/coredns-66bc5c9577-9xx2v/coredns id=af229959-7e51-4f4c-95d3-4f732de00974 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d1f8dfa1d30baa5e62dc7d81e4b253adcdda10944366a2fa86da6ada05283a0
	Oct 08 22:59:52 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:52.98548439Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ab67af6d-d20d-4d02-9a23-8793cdfb1fe6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 22:59:52 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:52.985549876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:59:52 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:52.995279926Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a47c7e997cf73ef6ada18843983836491c3001858e27de218637a3849619a356 UID:8de3ed7b-63b8-4b8f-bc7c-4a46b11e83f6 NetNS:/var/run/netns/f1f545cf-1f94-4b5f-b38d-3811d6e589d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079520}] Aliases:map[]}"
	Oct 08 22:59:52 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:52.995320444Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 08 22:59:53 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:53.006368975Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a47c7e997cf73ef6ada18843983836491c3001858e27de218637a3849619a356 UID:8de3ed7b-63b8-4b8f-bc7c-4a46b11e83f6 NetNS:/var/run/netns/f1f545cf-1f94-4b5f-b38d-3811d6e589d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079520}] Aliases:map[]}"
	Oct 08 22:59:53 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:53.006767265Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 08 22:59:53 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:53.011344697Z" level=info msg="Ran pod sandbox a47c7e997cf73ef6ada18843983836491c3001858e27de218637a3849619a356 with infra container: default/busybox/POD" id=ab67af6d-d20d-4d02-9a23-8793cdfb1fe6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 22:59:53 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:53.012743909Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ebdf897c-cbd1-4e34-9266-82c6ba766158 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:53 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:53.013016109Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ebdf897c-cbd1-4e34-9266-82c6ba766158 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:53 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:53.013125436Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ebdf897c-cbd1-4e34-9266-82c6ba766158 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:53 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:53.014488365Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=75ce03ba-72f8-43b8-aa46-c794fd2021d2 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:59:53 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:53.016576159Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 08 22:59:54 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:54.994866922Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=75ce03ba-72f8-43b8-aa46-c794fd2021d2 name=/runtime.v1.ImageService/PullImage
	Oct 08 22:59:54 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:54.995607722Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fd8ab5db-68b4-4506-88d8-4abb020358b3 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:54 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:54.998006265Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9b97459d-4d92-4e8a-b3bb-fd2e33d7de60 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 22:59:55 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:55.010488783Z" level=info msg="Creating container: default/busybox/busybox" id=42db1cdb-49bc-4909-9c4f-dbbf0f5d4541 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:59:55 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:55.011456211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:59:55 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:55.016755219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:59:55 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:55.017300973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 22:59:55 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:55.034493691Z" level=info msg="Created container 849cf49611c141dfabc3a2d253b77c8860c8248e704109af12574c636e30443a: default/busybox/busybox" id=42db1cdb-49bc-4909-9c4f-dbbf0f5d4541 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 22:59:55 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:55.035296491Z" level=info msg="Starting container: 849cf49611c141dfabc3a2d253b77c8860c8248e704109af12574c636e30443a" id=76da18aa-3e0b-4419-88d1-5feee1c8f014 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 22:59:55 default-k8s-diff-port-779490 crio[834]: time="2025-10-08T22:59:55.037274319Z" level=info msg="Started container" PID=1774 containerID=849cf49611c141dfabc3a2d253b77c8860c8248e704109af12574c636e30443a description=default/busybox/busybox id=76da18aa-3e0b-4419-88d1-5feee1c8f014 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a47c7e997cf73ef6ada18843983836491c3001858e27de218637a3849619a356
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	849cf49611c14       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   a47c7e997cf73       busybox                                                default
	2580ac2f84906       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   2d1f8dfa1d30b       coredns-66bc5c9577-9xx2v                               kube-system
	e97335526d0c9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   e952791c6cebd       storage-provisioner                                    kube-system
	671eefaf95888       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      52 seconds ago       Running             kube-proxy                0                   c0ee83f107c3d       kube-proxy-jrvxc                                       kube-system
	9d7a8b70f9f91       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   382526a3fffaf       kindnet-9vmvl                                          kube-system
	17af9441a193e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   4163ef50ec374       kube-scheduler-default-k8s-diff-port-779490            kube-system
	1ab64ab64fd5d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   9684f9eca7931       kube-controller-manager-default-k8s-diff-port-779490   kube-system
	02ee5f953afb0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   f0e2f774c525f       etcd-default-k8s-diff-port-779490                      kube-system
	85a3e1c25c695       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   da06f070641ec       kube-apiserver-default-k8s-diff-port-779490            kube-system
	
	
	==> coredns [2580ac2f849062d712a6d121e32855b68c38bbf7bd926f4895058204a4d2868e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40435 - 21074 "HINFO IN 4818515434825198663.659415716455597265. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015882656s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-779490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-779490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=default-k8s-diff-port-779490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_59_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:58:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-779490
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 22:59:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 22:59:54 +0000   Wed, 08 Oct 2025 22:58:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 22:59:54 +0000   Wed, 08 Oct 2025 22:58:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 22:59:54 +0000   Wed, 08 Oct 2025 22:58:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 22:59:54 +0000   Wed, 08 Oct 2025 22:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-779490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d96f774690174f628f7f94b9149b8571
	  System UUID:                c1cdfe18-651a-4f09-abda-0497a79b449c
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-9xx2v                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-779490                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-9vmvl                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-779490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-779490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-jrvxc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-779490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-779490 event: Registered Node default-k8s-diff-port-779490 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-779490 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 8 22:30] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:31] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:58] overlayfs: idmapped layers are currently not supported
	[  +5.164783] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [02ee5f953afb0b184e3befbec3aa34993991cfbc2ffa78363a77985d9c6bf0bb] <==
	{"level":"warn","ts":"2025-10-08T22:58:57.789517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:57.828691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:57.962307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:57.974518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:58.027908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:58.062195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:58:58.366548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T22:59:00.266544Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.693394ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-08T22:59:00.266639Z","caller":"traceutil/trace.go:172","msg":"trace[1253218652] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:6; }","duration":"102.841392ms","start":"2025-10-08T22:59:00.163781Z","end":"2025-10-08T22:59:00.266622Z","steps":["trace[1253218652] 'agreement among raft nodes before linearized reading'  (duration: 81.755771ms)","trace[1253218652] 'range keys from in-memory index tree'  (duration: 20.905869ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-08T22:59:00.266933Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.901058ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-08T22:59:00.266960Z","caller":"traceutil/trace.go:172","msg":"trace[1684373262] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:6; }","duration":"118.931927ms","start":"2025-10-08T22:59:00.148020Z","end":"2025-10-08T22:59:00.266952Z","steps":["trace[1684373262] 'agreement among raft nodes before linearized reading'  (duration: 97.535828ms)","trace[1684373262] 'range keys from in-memory index tree'  (duration: 21.353497ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-08T22:59:00.268139Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.180139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-779490\" limit:1 ","response":"range_response_count:1 size:4369"}
	{"level":"info","ts":"2025-10-08T22:59:00.268205Z","caller":"traceutil/trace.go:172","msg":"trace[1116477632] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-779490; range_end:; response_count:1; response_revision:6; }","duration":"120.250522ms","start":"2025-10-08T22:59:00.147935Z","end":"2025-10-08T22:59:00.268186Z","steps":["trace[1116477632] 'agreement among raft nodes before linearized reading'  (duration: 97.628309ms)","trace[1116477632] 'range keys from in-memory index tree'  (duration: 22.487697ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-08T22:59:00.268433Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.519802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-10-08T22:59:00.268464Z","caller":"traceutil/trace.go:172","msg":"trace[2042781099] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:6; }","duration":"120.554797ms","start":"2025-10-08T22:59:00.147896Z","end":"2025-10-08T22:59:00.268451Z","steps":["trace[2042781099] 'agreement among raft nodes before linearized reading'  (duration: 97.676039ms)","trace[2042781099] 'range keys from in-memory index tree'  (duration: 22.829699ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-08T22:59:00.273014Z","caller":"traceutil/trace.go:172","msg":"trace[569303803] transaction","detail":"{read_only:false; number_of_response:0; response_revision:6; }","duration":"140.750053ms","start":"2025-10-08T22:59:00.132250Z","end":"2025-10-08T22:59:00.273000Z","steps":["trace[569303803] 'process raft request'  (duration: 113.347336ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T22:59:00.273289Z","caller":"traceutil/trace.go:172","msg":"trace[954173335] transaction","detail":"{read_only:false; response_revision:7; number_of_response:1; }","duration":"142.286186ms","start":"2025-10-08T22:59:00.130990Z","end":"2025-10-08T22:59:00.273277Z","steps":["trace[954173335] 'process raft request'  (duration: 139.165829ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T22:59:00.273483Z","caller":"traceutil/trace.go:172","msg":"trace[1816924947] transaction","detail":"{read_only:false; response_revision:8; number_of_response:1; }","duration":"126.273493ms","start":"2025-10-08T22:59:00.147200Z","end":"2025-10-08T22:59:00.273473Z","steps":["trace[1816924947] 'process raft request'  (duration: 123.057316ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T22:59:00.273682Z","caller":"traceutil/trace.go:172","msg":"trace[516830823] transaction","detail":"{read_only:false; response_revision:9; number_of_response:1; }","duration":"126.401716ms","start":"2025-10-08T22:59:00.147271Z","end":"2025-10-08T22:59:00.273673Z","steps":["trace[516830823] 'process raft request'  (duration: 123.016307ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T22:59:00.284052Z","caller":"traceutil/trace.go:172","msg":"trace[1592668366] transaction","detail":"{read_only:false; response_revision:15; number_of_response:1; }","duration":"130.768192ms","start":"2025-10-08T22:59:00.153254Z","end":"2025-10-08T22:59:00.284022Z","steps":["trace[1592668366] 'process raft request'  (duration: 119.471961ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T22:59:00.284833Z","caller":"traceutil/trace.go:172","msg":"trace[1791255608] transaction","detail":"{read_only:false; response_revision:10; number_of_response:1; }","duration":"137.417058ms","start":"2025-10-08T22:59:00.147398Z","end":"2025-10-08T22:59:00.284815Z","steps":["trace[1791255608] 'process raft request'  (duration: 122.912617ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T22:59:00.285024Z","caller":"traceutil/trace.go:172","msg":"trace[1123434698] transaction","detail":"{read_only:false; response_revision:11; number_of_response:1; }","duration":"137.577437ms","start":"2025-10-08T22:59:00.147437Z","end":"2025-10-08T22:59:00.285015Z","steps":["trace[1123434698] 'process raft request'  (duration: 122.908818ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T22:59:00.285118Z","caller":"traceutil/trace.go:172","msg":"trace[2141552162] transaction","detail":"{read_only:false; response_revision:12; number_of_response:1; }","duration":"137.648691ms","start":"2025-10-08T22:59:00.147461Z","end":"2025-10-08T22:59:00.285110Z","steps":["trace[2141552162] 'process raft request'  (duration: 122.905397ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T22:59:00.285201Z","caller":"traceutil/trace.go:172","msg":"trace[333285829] transaction","detail":"{read_only:false; response_revision:13; number_of_response:1; }","duration":"137.705906ms","start":"2025-10-08T22:59:00.147488Z","end":"2025-10-08T22:59:00.285194Z","steps":["trace[333285829] 'process raft request'  (duration: 122.897125ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-08T22:59:00.285371Z","caller":"traceutil/trace.go:172","msg":"trace[1193991274] transaction","detail":"{read_only:false; response_revision:14; number_of_response:1; }","duration":"137.530987ms","start":"2025-10-08T22:59:00.147831Z","end":"2025-10-08T22:59:00.285362Z","steps":["trace[1193991274] 'process raft request'  (duration: 124.819192ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:00:03 up  1:42,  0 user,  load average: 1.94, 1.73, 1.74
	Linux default-k8s-diff-port-779490 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9d7a8b70f9f919af1c180b74d85c4149b3d42625c41833bc42067ae6bafe17da] <==
	I1008 22:59:09.412547       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 22:59:09.502609       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 22:59:09.502749       1 main.go:148] setting mtu 1500 for CNI 
	I1008 22:59:09.502768       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 22:59:09.502784       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T22:59:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 22:59:09.705136       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 22:59:09.705154       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 22:59:09.705162       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 22:59:09.705835       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 22:59:39.705569       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 22:59:39.705570       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 22:59:39.705858       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 22:59:39.706826       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1008 22:59:41.205267       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 22:59:41.205303       1 metrics.go:72] Registering metrics
	I1008 22:59:41.205377       1 controller.go:711] "Syncing nftables rules"
	I1008 22:59:49.712448       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:59:49.712508       1 main.go:301] handling current node
	I1008 22:59:59.708818       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 22:59:59.708972       1 main.go:301] handling current node
	
	
	==> kube-apiserver [85a3e1c25c69562b3b340af7f656e72bd2d2541706109bad9d324e7a323c3713] <==
	I1008 22:59:00.090266       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1008 22:59:00.090272       1 cache.go:39] Caches are synced for autoregister controller
	I1008 22:59:00.149468       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 22:59:00.321866       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:59:00.327493       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1008 22:59:00.380449       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:59:00.387099       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 22:59:00.745522       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1008 22:59:00.755063       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1008 22:59:00.755089       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 22:59:01.786364       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 22:59:01.880501       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 22:59:01.999742       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 22:59:02.012902       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1008 22:59:02.014372       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 22:59:02.020667       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 22:59:02.857888       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 22:59:03.070771       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 22:59:03.113749       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 22:59:03.139953       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1008 22:59:08.707417       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:59:08.717495       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 22:59:08.882447       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 22:59:08.937599       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1008 23:00:01.131112       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:51806: use of closed network connection
	
	
	==> kube-controller-manager [1ab64ab64fd5d4ca177400fbf6c4d3452746076d172a0aadf513ad4f66ed091a] <==
	I1008 22:59:07.865518       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 22:59:07.874830       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 22:59:07.876081       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 22:59:07.882511       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:59:07.893067       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 22:59:07.893177       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 22:59:07.893252       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-779490"
	I1008 22:59:07.893344       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 22:59:07.894174       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 22:59:07.895336       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 22:59:07.901147       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 22:59:07.901199       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 22:59:07.901398       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1008 22:59:07.904435       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 22:59:07.904489       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 22:59:07.904660       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1008 22:59:07.908193       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1008 22:59:07.908293       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 22:59:07.908826       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 22:59:07.920258       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1008 22:59:07.920514       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1008 22:59:07.928424       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 22:59:07.928448       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 22:59:07.928455       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1008 22:59:52.900139       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [671eefaf9588805a9e621f27d178f65b36fe507b88c28f259717a682c4c84784] <==
	I1008 22:59:10.941051       1 server_linux.go:53] "Using iptables proxy"
	I1008 22:59:11.035148       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 22:59:11.135454       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 22:59:11.135489       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1008 22:59:11.135554       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 22:59:11.156412       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 22:59:11.156537       1 server_linux.go:132] "Using iptables Proxier"
	I1008 22:59:11.161596       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 22:59:11.162027       1 server.go:527] "Version info" version="v1.34.1"
	I1008 22:59:11.162218       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 22:59:11.163657       1 config.go:200] "Starting service config controller"
	I1008 22:59:11.163721       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 22:59:11.163782       1 config.go:106] "Starting endpoint slice config controller"
	I1008 22:59:11.163812       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 22:59:11.163849       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 22:59:11.163876       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 22:59:11.164566       1 config.go:309] "Starting node config controller"
	I1008 22:59:11.164631       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 22:59:11.164662       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 22:59:11.263915       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 22:59:11.263916       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 22:59:11.263957       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [17af9441a193e1e5b5ca236db62a9ced0b3c68f633f7a6c6fed98cc48812a75d] <==
	E1008 22:59:00.235237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 22:59:00.235884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 22:59:00.236004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 22:59:00.236091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1008 22:59:00.236199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 22:59:00.236284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 22:59:00.236369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 22:59:00.236446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 22:59:00.236527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 22:59:00.236612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 22:59:00.236691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 22:59:00.247228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 22:59:00.247323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 22:59:00.247420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 22:59:00.247508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 22:59:00.247588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 22:59:00.247696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 22:59:00.284509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1008 22:59:01.112220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 22:59:01.207989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 22:59:01.221478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 22:59:01.248304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1008 22:59:01.257765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 22:59:01.332415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1008 22:59:04.066917       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 22:59:07 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:07.931276    1289 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: E1008 22:59:09.048059    1289 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-779490\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-779490' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.105839    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fddc70f-a214-4909-ae97-566094420ce0-xtables-lock\") pod \"kindnet-9vmvl\" (UID: \"7fddc70f-a214-4909-ae97-566094420ce0\") " pod="kube-system/kindnet-9vmvl"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.105902    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frz7n\" (UniqueName: \"kubernetes.io/projected/7fddc70f-a214-4909-ae97-566094420ce0-kube-api-access-frz7n\") pod \"kindnet-9vmvl\" (UID: \"7fddc70f-a214-4909-ae97-566094420ce0\") " pod="kube-system/kindnet-9vmvl"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.105933    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7fddc70f-a214-4909-ae97-566094420ce0-cni-cfg\") pod \"kindnet-9vmvl\" (UID: \"7fddc70f-a214-4909-ae97-566094420ce0\") " pod="kube-system/kindnet-9vmvl"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.105967    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fddc70f-a214-4909-ae97-566094420ce0-lib-modules\") pod \"kindnet-9vmvl\" (UID: \"7fddc70f-a214-4909-ae97-566094420ce0\") " pod="kube-system/kindnet-9vmvl"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.105985    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cbffb55c-72e0-4086-b82a-f59db471adf4-kube-proxy\") pod \"kube-proxy-jrvxc\" (UID: \"cbffb55c-72e0-4086-b82a-f59db471adf4\") " pod="kube-system/kube-proxy-jrvxc"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.106003    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbffb55c-72e0-4086-b82a-f59db471adf4-xtables-lock\") pod \"kube-proxy-jrvxc\" (UID: \"cbffb55c-72e0-4086-b82a-f59db471adf4\") " pod="kube-system/kube-proxy-jrvxc"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.106019    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dswfd\" (UniqueName: \"kubernetes.io/projected/cbffb55c-72e0-4086-b82a-f59db471adf4-kube-api-access-dswfd\") pod \"kube-proxy-jrvxc\" (UID: \"cbffb55c-72e0-4086-b82a-f59db471adf4\") " pod="kube-system/kube-proxy-jrvxc"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.106040    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbffb55c-72e0-4086-b82a-f59db471adf4-lib-modules\") pod \"kube-proxy-jrvxc\" (UID: \"cbffb55c-72e0-4086-b82a-f59db471adf4\") " pod="kube-system/kube-proxy-jrvxc"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.227923    1289 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 08 22:59:09 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:09.777466    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9vmvl" podStartSLOduration=1.7774456239999998 podStartE2EDuration="1.777445624s" podCreationTimestamp="2025-10-08 22:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:59:09.498276401 +0000 UTC m=+6.536815302" watchObservedRunningTime="2025-10-08 22:59:09.777445624 +0000 UTC m=+6.815984533"
	Oct 08 22:59:10 default-k8s-diff-port-779490 kubelet[1289]: E1008 22:59:10.209005    1289 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 08 22:59:10 default-k8s-diff-port-779490 kubelet[1289]: E1008 22:59:10.209159    1289 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cbffb55c-72e0-4086-b82a-f59db471adf4-kube-proxy podName:cbffb55c-72e0-4086-b82a-f59db471adf4 nodeName:}" failed. No retries permitted until 2025-10-08 22:59:10.709122798 +0000 UTC m=+7.747661691 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/cbffb55c-72e0-4086-b82a-f59db471adf4-kube-proxy") pod "kube-proxy-jrvxc" (UID: "cbffb55c-72e0-4086-b82a-f59db471adf4") : failed to sync configmap cache: timed out waiting for the condition
	Oct 08 22:59:12 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:12.431538    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jrvxc" podStartSLOduration=4.431519213 podStartE2EDuration="4.431519213s" podCreationTimestamp="2025-10-08 22:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:59:11.501785636 +0000 UTC m=+8.540324537" watchObservedRunningTime="2025-10-08 22:59:12.431519213 +0000 UTC m=+9.470058115"
	Oct 08 22:59:49 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:49.980438    1289 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 08 22:59:50 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:50.105282    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-876xn\" (UniqueName: \"kubernetes.io/projected/6311a0df-659e-42b5-a6ea-a6802aa5c5bc-kube-api-access-876xn\") pod \"coredns-66bc5c9577-9xx2v\" (UID: \"6311a0df-659e-42b5-a6ea-a6802aa5c5bc\") " pod="kube-system/coredns-66bc5c9577-9xx2v"
	Oct 08 22:59:50 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:50.105335    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6311a0df-659e-42b5-a6ea-a6802aa5c5bc-config-volume\") pod \"coredns-66bc5c9577-9xx2v\" (UID: \"6311a0df-659e-42b5-a6ea-a6802aa5c5bc\") " pod="kube-system/coredns-66bc5c9577-9xx2v"
	Oct 08 22:59:50 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:50.105368    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/45961cee-2d6e-4219-bff8-34050548a8b0-tmp\") pod \"storage-provisioner\" (UID: \"45961cee-2d6e-4219-bff8-34050548a8b0\") " pod="kube-system/storage-provisioner"
	Oct 08 22:59:50 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:50.105387    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f2kf\" (UniqueName: \"kubernetes.io/projected/45961cee-2d6e-4219-bff8-34050548a8b0-kube-api-access-8f2kf\") pod \"storage-provisioner\" (UID: \"45961cee-2d6e-4219-bff8-34050548a8b0\") " pod="kube-system/storage-provisioner"
	Oct 08 22:59:50 default-k8s-diff-port-779490 kubelet[1289]: W1008 22:59:50.346587    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/crio-e952791c6cebd767be16e037d72febaf17415a61af84194793c294f48d457ebd WatchSource:0}: Error finding container e952791c6cebd767be16e037d72febaf17415a61af84194793c294f48d457ebd: Status 404 returned error can't find the container with id e952791c6cebd767be16e037d72febaf17415a61af84194793c294f48d457ebd
	Oct 08 22:59:50 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:50.610130    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9xx2v" podStartSLOduration=41.610109283 podStartE2EDuration="41.610109283s" podCreationTimestamp="2025-10-08 22:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:59:50.594565331 +0000 UTC m=+47.633104240" watchObservedRunningTime="2025-10-08 22:59:50.610109283 +0000 UTC m=+47.648648176"
	Oct 08 22:59:50 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:50.627112    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.627091398 podStartE2EDuration="41.627091398s" podCreationTimestamp="2025-10-08 22:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 22:59:50.612018992 +0000 UTC m=+47.650557893" watchObservedRunningTime="2025-10-08 22:59:50.627091398 +0000 UTC m=+47.665630291"
	Oct 08 22:59:52 default-k8s-diff-port-779490 kubelet[1289]: I1008 22:59:52.727208    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnc4f\" (UniqueName: \"kubernetes.io/projected/8de3ed7b-63b8-4b8f-bc7c-4a46b11e83f6-kube-api-access-rnc4f\") pod \"busybox\" (UID: \"8de3ed7b-63b8-4b8f-bc7c-4a46b11e83f6\") " pod="default/busybox"
	Oct 08 22:59:53 default-k8s-diff-port-779490 kubelet[1289]: W1008 22:59:53.009457    1289 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/crio-a47c7e997cf73ef6ada18843983836491c3001858e27de218637a3849619a356 WatchSource:0}: Error finding container a47c7e997cf73ef6ada18843983836491c3001858e27de218637a3849619a356: Status 404 returned error can't find the container with id a47c7e997cf73ef6ada18843983836491c3001858e27de218637a3849619a356
	
	
	==> storage-provisioner [e97335526d0c97628626e0d76a2cd762327e6f1a9748ebef5520371e608bfa96] <==
	I1008 22:59:50.443272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 22:59:50.461800       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 22:59:50.461937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 22:59:50.465008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:50.489963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:59:50.490164       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 22:59:50.490682       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1621d9ca-2fb2-43ad-b54a-b562c4b49118", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-779490_ded92c48-0f36-491b-ac75-f9082cf15c3f became leader
	I1008 22:59:50.490819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-779490_ded92c48-0f36-491b-ac75-f9082cf15c3f!
	W1008 22:59:50.498375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:50.503479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 22:59:50.597762       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-779490_ded92c48-0f36-491b-ac75-f9082cf15c3f!
	W1008 22:59:52.507143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:52.512789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:54.517712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:54.522354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:56.525257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:56.530074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:58.533230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 22:59:58.540185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:00:00.555261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:00:00.568859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:00:02.574884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:00:02.588983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
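The storage-provisioner log above acquires its leader lock through the legacy v1 Endpoints object kube-system/k8s.io-minikube-hostpath, which is why the "v1 Endpoints is deprecated" warning repeats on what appears to be every lease renewal. A quick way to look at that lock object, assuming kubectl is still pointed at this cluster's context, is:

	kubectl --context default-k8s-diff-port-779490 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml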
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-779490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.65s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-779490 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-779490 --alsologtostderr -v=1: exit status 80 (2.607415677s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-779490 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 23:01:23.514764  204602 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:01:23.514971  204602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:01:23.514988  204602 out.go:374] Setting ErrFile to fd 2...
	I1008 23:01:23.514995  204602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:01:23.515325  204602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:01:23.515636  204602 out.go:368] Setting JSON to false
	I1008 23:01:23.515677  204602 mustload.go:65] Loading cluster: default-k8s-diff-port-779490
	I1008 23:01:23.516104  204602 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:01:23.516642  204602 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:01:23.536714  204602 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:01:23.537145  204602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:01:23.603854  204602 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-08 23:01:23.590463352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:01:23.604518  204602 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-779490 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1008 23:01:23.606148  204602 out.go:179] * Pausing node default-k8s-diff-port-779490 ... 
	I1008 23:01:23.607503  204602 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:01:23.607836  204602 ssh_runner.go:195] Run: systemctl --version
	I1008 23:01:23.607882  204602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:01:23.627084  204602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:01:23.733816  204602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:01:23.756000  204602 pause.go:52] kubelet running: true
	I1008 23:01:23.756078  204602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:01:24.053109  204602 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:01:24.053191  204602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:01:24.148425  204602 cri.go:89] found id: "f53fecc8b57f03ffafccaf27e308d0f2475f20d0a79b800e28025b87e8e9f33d"
	I1008 23:01:24.148457  204602 cri.go:89] found id: "e5d915946b8ea944e37566f7106abac224ef11871f731d856aaf37c2bac231dd"
	I1008 23:01:24.148464  204602 cri.go:89] found id: "1944ceb47b7c94b2edb63db70a4a7001ea79c19f4c62e47e167fe7d6263a8565"
	I1008 23:01:24.148468  204602 cri.go:89] found id: "4a200e7e0c4c7fa3195d199b8f5e47922f16fe844523cd9c5eb8cb9c5b3a5f92"
	I1008 23:01:24.148472  204602 cri.go:89] found id: "8a7be09e8d3357ea5b26e1774372d50014be3d5c01add4f9434273ec80f5272e"
	I1008 23:01:24.148476  204602 cri.go:89] found id: "0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d"
	I1008 23:01:24.148479  204602 cri.go:89] found id: "b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a"
	I1008 23:01:24.148482  204602 cri.go:89] found id: "a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec"
	I1008 23:01:24.148485  204602 cri.go:89] found id: "d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9"
	I1008 23:01:24.148491  204602 cri.go:89] found id: "4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395"
	I1008 23:01:24.148494  204602 cri.go:89] found id: "278e35cc7fbccaf5c63b64c560388a6a30f3774aced449276cff7421f19bcdfb"
	I1008 23:01:24.148497  204602 cri.go:89] found id: ""
	I1008 23:01:24.148579  204602 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:01:24.172804  204602 retry.go:31] will retry after 191.970469ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:01:24Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:01:24.365271  204602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:01:24.378269  204602 pause.go:52] kubelet running: false
	I1008 23:01:24.378340  204602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:01:24.578106  204602 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:01:24.578213  204602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:01:24.659634  204602 cri.go:89] found id: "f53fecc8b57f03ffafccaf27e308d0f2475f20d0a79b800e28025b87e8e9f33d"
	I1008 23:01:24.659659  204602 cri.go:89] found id: "e5d915946b8ea944e37566f7106abac224ef11871f731d856aaf37c2bac231dd"
	I1008 23:01:24.659665  204602 cri.go:89] found id: "1944ceb47b7c94b2edb63db70a4a7001ea79c19f4c62e47e167fe7d6263a8565"
	I1008 23:01:24.659669  204602 cri.go:89] found id: "4a200e7e0c4c7fa3195d199b8f5e47922f16fe844523cd9c5eb8cb9c5b3a5f92"
	I1008 23:01:24.659672  204602 cri.go:89] found id: "8a7be09e8d3357ea5b26e1774372d50014be3d5c01add4f9434273ec80f5272e"
	I1008 23:01:24.659676  204602 cri.go:89] found id: "0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d"
	I1008 23:01:24.659680  204602 cri.go:89] found id: "b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a"
	I1008 23:01:24.659683  204602 cri.go:89] found id: "a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec"
	I1008 23:01:24.659686  204602 cri.go:89] found id: "d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9"
	I1008 23:01:24.659692  204602 cri.go:89] found id: "4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395"
	I1008 23:01:24.659696  204602 cri.go:89] found id: "278e35cc7fbccaf5c63b64c560388a6a30f3774aced449276cff7421f19bcdfb"
	I1008 23:01:24.659699  204602 cri.go:89] found id: ""
	I1008 23:01:24.659752  204602 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:01:24.671231  204602 retry.go:31] will retry after 212.762582ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:01:24Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:01:24.884707  204602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:01:24.898497  204602 pause.go:52] kubelet running: false
	I1008 23:01:24.898563  204602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:01:25.089236  204602 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:01:25.089317  204602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:01:25.193322  204602 cri.go:89] found id: "f53fecc8b57f03ffafccaf27e308d0f2475f20d0a79b800e28025b87e8e9f33d"
	I1008 23:01:25.193342  204602 cri.go:89] found id: "e5d915946b8ea944e37566f7106abac224ef11871f731d856aaf37c2bac231dd"
	I1008 23:01:25.193347  204602 cri.go:89] found id: "1944ceb47b7c94b2edb63db70a4a7001ea79c19f4c62e47e167fe7d6263a8565"
	I1008 23:01:25.193351  204602 cri.go:89] found id: "4a200e7e0c4c7fa3195d199b8f5e47922f16fe844523cd9c5eb8cb9c5b3a5f92"
	I1008 23:01:25.193354  204602 cri.go:89] found id: "8a7be09e8d3357ea5b26e1774372d50014be3d5c01add4f9434273ec80f5272e"
	I1008 23:01:25.193358  204602 cri.go:89] found id: "0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d"
	I1008 23:01:25.193361  204602 cri.go:89] found id: "b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a"
	I1008 23:01:25.193364  204602 cri.go:89] found id: "a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec"
	I1008 23:01:25.193367  204602 cri.go:89] found id: "d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9"
	I1008 23:01:25.193373  204602 cri.go:89] found id: "4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395"
	I1008 23:01:25.193376  204602 cri.go:89] found id: "278e35cc7fbccaf5c63b64c560388a6a30f3774aced449276cff7421f19bcdfb"
	I1008 23:01:25.193379  204602 cri.go:89] found id: ""
	I1008 23:01:25.193423  204602 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:01:25.205190  204602 retry.go:31] will retry after 485.440677ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:01:25Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:01:25.691493  204602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:01:25.708564  204602 pause.go:52] kubelet running: false
	I1008 23:01:25.708628  204602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:01:25.932435  204602 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:01:25.932524  204602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:01:26.028401  204602 cri.go:89] found id: "f53fecc8b57f03ffafccaf27e308d0f2475f20d0a79b800e28025b87e8e9f33d"
	I1008 23:01:26.028427  204602 cri.go:89] found id: "e5d915946b8ea944e37566f7106abac224ef11871f731d856aaf37c2bac231dd"
	I1008 23:01:26.028433  204602 cri.go:89] found id: "1944ceb47b7c94b2edb63db70a4a7001ea79c19f4c62e47e167fe7d6263a8565"
	I1008 23:01:26.028438  204602 cri.go:89] found id: "4a200e7e0c4c7fa3195d199b8f5e47922f16fe844523cd9c5eb8cb9c5b3a5f92"
	I1008 23:01:26.028441  204602 cri.go:89] found id: "8a7be09e8d3357ea5b26e1774372d50014be3d5c01add4f9434273ec80f5272e"
	I1008 23:01:26.028445  204602 cri.go:89] found id: "0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d"
	I1008 23:01:26.028448  204602 cri.go:89] found id: "b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a"
	I1008 23:01:26.028451  204602 cri.go:89] found id: "a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec"
	I1008 23:01:26.028453  204602 cri.go:89] found id: "d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9"
	I1008 23:01:26.028459  204602 cri.go:89] found id: "4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395"
	I1008 23:01:26.028463  204602 cri.go:89] found id: "278e35cc7fbccaf5c63b64c560388a6a30f3774aced449276cff7421f19bcdfb"
	I1008 23:01:26.028466  204602 cri.go:89] found id: ""
	I1008 23:01:26.028515  204602 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:01:26.044697  204602 out.go:203] 
	W1008 23:01:26.047938  204602 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:01:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:01:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 23:01:26.047960  204602 out.go:285] * 
	* 
	W1008 23:01:26.053759  204602 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 23:01:26.057025  204602 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-779490 --alsologtostderr -v=1 failed: exit status 80
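Exit status 80 corresponds to the GUEST_PAUSE error shown in the stderr above: pause first lists the kube-system, kubernetes-dashboard and istio-operator containers through crictl, then shells out to `sudo runc list -f json`, and gives up after three retries because /run/runc does not exist inside the node container even though crictl still reports the containers. A minimal sketch for reproducing the failing step by hand, assuming the profile from this run is still up and reachable over SSH, would be:

	# the CRI view that pause used to collect the container IDs listed above
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-779490 -- sudo crictl ps
	# the exact command pause retried before exiting with GUEST_PAUSE
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-779490 -- sudo runc list -f json
	# check whether the runc state directory that command expects exists at all
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-779490 -- sudo ls -ld /run/runc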
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-779490
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-779490:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca",
	        "Created": "2025-10-08T22:58:32.369538297Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T23:00:17.439390742Z",
	            "FinishedAt": "2025-10-08T23:00:16.44928857Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/hosts",
	        "LogPath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca-json.log",
	        "Name": "/default-k8s-diff-port-779490",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-779490:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-779490",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca",
	                "LowerDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-779490",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-779490/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-779490",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-779490",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-779490",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "62aa5cf728b093259a42ddeecdf7f43b5829a55eccf96dbfca4179b5d0f8f50a",
	            "SandboxKey": "/var/run/docker/netns/62aa5cf728b0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-779490": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:0c:9a:73:4b:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95a85807530ab25d32d1815ee29ce3fc904bd88d88973d6a88e562431efd0d87",
	                    "EndpointID": "b6f44dc4c2ca4f239fa8f920ac2d819b01a2db9731989602123c4a8ea7a4610f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-779490",
	                        "74faf5bf01ef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490: exit status 2 (381.077636ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-779490 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-779490 logs -n 25: (1.646618128s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	│ stop    │ -p no-preload-939665 --alsologtostderr -v=3                                                                                                                              │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:58 UTC │
	│ image   │ no-preload-939665 image list --format=json                                                                                                                               │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ pause   │ -p no-preload-939665 --alsologtostderr -v=1                                                                                                                              │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	│ ssh     │ force-systemd-flag-385382 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p force-systemd-flag-385382                                                                                                                                             │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                     │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                     │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-036919                                                                                                                                          │ disable-driver-mounts-036919 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:59 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │                     │
	│ stop    │ -p embed-certs-825429 --alsologtostderr -v=3                                                                                                                             │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ stop    │ -p default-k8s-diff-port-779490 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-825429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-779490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-779490 image list --format=json                                                                                                                    │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-779490 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ image   │ embed-certs-825429 image list --format=json                                                                                                                              │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p embed-certs-825429 --alsologtostderr -v=1                                                                                                                             │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 23:00:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 23:00:17.163938  200735 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:00:17.164058  200735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:00:17.164070  200735 out.go:374] Setting ErrFile to fd 2...
	I1008 23:00:17.164076  200735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:00:17.164320  200735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:00:17.164684  200735 out.go:368] Setting JSON to false
	I1008 23:00:17.165518  200735 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6168,"bootTime":1759958250,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 23:00:17.165584  200735 start.go:141] virtualization:  
	I1008 23:00:17.170349  200735 out.go:179] * [default-k8s-diff-port-779490] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 23:00:17.173550  200735 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 23:00:17.173606  200735 notify.go:220] Checking for updates...
	I1008 23:00:17.179549  200735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 23:00:17.182394  200735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:17.185318  200735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 23:00:17.188242  200735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 23:00:17.191227  200735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 23:00:17.194561  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:17.195186  200735 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 23:00:17.221784  200735 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 23:00:17.221965  200735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:00:17.290959  200735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-08 23:00:17.282099792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:00:17.291074  200735 docker.go:318] overlay module found
	I1008 23:00:17.294262  200735 out.go:179] * Using the docker driver based on existing profile
	I1008 23:00:17.297119  200735 start.go:305] selected driver: docker
	I1008 23:00:17.297140  200735 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:17.297251  200735 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 23:00:17.298023  200735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:00:17.356048  200735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-08 23:00:17.346390453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:00:17.356372  200735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:17.356415  200735 cni.go:84] Creating CNI manager for ""
	I1008 23:00:17.356471  200735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:17.356518  200735 start.go:349] cluster config:
	{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:17.359826  200735 out.go:179] * Starting "default-k8s-diff-port-779490" primary control-plane node in "default-k8s-diff-port-779490" cluster
	I1008 23:00:17.362672  200735 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 23:00:17.365466  200735 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 23:00:17.368335  200735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:17.368364  200735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 23:00:17.368384  200735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 23:00:17.368391  200735 cache.go:58] Caching tarball of preloaded images
	I1008 23:00:17.368477  200735 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 23:00:17.368487  200735 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 23:00:17.368593  200735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 23:00:17.387741  200735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 23:00:17.387766  200735 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 23:00:17.387788  200735 cache.go:232] Successfully downloaded all kic artifacts
	I1008 23:00:17.387813  200735 start.go:360] acquireMachinesLock for default-k8s-diff-port-779490: {Name:mkf9138008d7ef2884518c448a03b33b088d9068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 23:00:17.387870  200735 start.go:364] duration metric: took 34.314µs to acquireMachinesLock for "default-k8s-diff-port-779490"
	I1008 23:00:17.387894  200735 start.go:96] Skipping create...Using existing machine configuration
	I1008 23:00:17.387906  200735 fix.go:54] fixHost starting: 
	I1008 23:00:17.388165  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:17.405667  200735 fix.go:112] recreateIfNeeded on default-k8s-diff-port-779490: state=Stopped err=<nil>
	W1008 23:00:17.405698  200735 fix.go:138] unexpected machine state, will restart: <nil>
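The pair of "docker container inspect ... --format={{.State.Status}}" calls above is how minikube decides that the existing machine only needs a restart (state=Stopped) rather than a recreate. Below is a minimal sketch of that check, shelling out to the docker CLI with a Go template; containerState is an illustrative helper name, not minikube's actual cli_runner API.

// container_state.go - sketch of reading a container's state the way the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns values such as "running" or "exited" for the named container.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("default-k8s-diff-port-779490")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// A stopped container reports "exited", which is what triggers the
	// "Restarting existing docker container" path seen in the log.
	fmt.Println("container state:", state)
}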
	I1008 23:00:16.057868  200074 out.go:252] * Restarting existing docker container for "embed-certs-825429" ...
	I1008 23:00:16.057965  200074 cli_runner.go:164] Run: docker start embed-certs-825429
	I1008 23:00:16.315950  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:16.335815  200074 kic.go:430] container "embed-certs-825429" state is running.
	I1008 23:00:16.336208  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:16.356036  200074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/config.json ...
	I1008 23:00:16.356262  200074 machine.go:93] provisionDockerMachine start ...
	I1008 23:00:16.356315  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:16.378830  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:16.379148  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:16.379157  200074 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:00:16.380409  200074 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59024->127.0.0.1:33081: read: connection reset by peer
	I1008 23:00:19.529381  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 23:00:19.529407  200074 ubuntu.go:182] provisioning hostname "embed-certs-825429"
	I1008 23:00:19.529470  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:19.548688  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:19.549089  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:19.549126  200074 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825429 && echo "embed-certs-825429" | sudo tee /etc/hostname
	I1008 23:00:19.704942  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 23:00:19.705029  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:19.723786  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:19.724093  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:19.724110  200074 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825429' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825429/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825429' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:00:19.870310  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:00:19.870379  200074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:00:19.870406  200074 ubuntu.go:190] setting up certificates
	I1008 23:00:19.870417  200074 provision.go:84] configureAuth start
	I1008 23:00:19.870501  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:19.888221  200074 provision.go:143] copyHostCerts
	I1008 23:00:19.888292  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:00:19.888316  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:00:19.888394  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:00:19.888499  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:00:19.888508  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:00:19.888537  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:00:19.888603  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:00:19.888615  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:00:19.888643  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:00:19.888697  200074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825429 san=[127.0.0.1 192.168.76.2 embed-certs-825429 localhost minikube]
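The provisioning above runs over the "native" SSH client against the container's published port on 127.0.0.1 (33081 here): set the hostname, patch /etc/hosts, then regenerate and copy certificates. The following is a rough sketch of such a session using golang.org/x/crypto/ssh; runCommand and the way the key is loaded are assumptions for illustration, not minikube's real helpers, while the port, user and key path are the values reported in the log.

// ssh_provision.go - sketch of running one provisioning command over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runCommand(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runCommand("127.0.0.1:33081", "docker",
		"/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa",
		`sudo hostname embed-certs-825429 && echo "embed-certs-825429" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}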
	I1008 23:00:17.408820  200735 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-779490" ...
	I1008 23:00:17.408898  200735 cli_runner.go:164] Run: docker start default-k8s-diff-port-779490
	I1008 23:00:17.666806  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:17.691387  200735 kic.go:430] container "default-k8s-diff-port-779490" state is running.
	I1008 23:00:17.691764  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:17.715368  200735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 23:00:17.715595  200735 machine.go:93] provisionDockerMachine start ...
	I1008 23:00:17.715865  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:17.740298  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:17.740619  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:17.740636  200735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:00:17.741357  200735 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 23:00:20.909388  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 23:00:20.909415  200735 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-779490"
	I1008 23:00:20.909477  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:20.926770  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:20.927074  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:20.927096  200735 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-779490 && echo "default-k8s-diff-port-779490" | sudo tee /etc/hostname
	I1008 23:00:21.093286  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 23:00:21.093383  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:21.122816  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:21.123125  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:21.123144  200735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-779490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-779490/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-779490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:00:21.274338  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:00:21.274367  200735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:00:21.274399  200735 ubuntu.go:190] setting up certificates
	I1008 23:00:21.274412  200735 provision.go:84] configureAuth start
	I1008 23:00:21.274479  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:21.301901  200735 provision.go:143] copyHostCerts
	I1008 23:00:21.301972  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:00:21.301995  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:00:21.302061  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:00:21.302175  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:00:21.302187  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:00:21.302212  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:00:21.302280  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:00:21.302297  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:00:21.302320  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:00:21.302377  200735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-779490 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-779490 localhost minikube]
	I1008 23:00:22.045829  200735 provision.go:177] copyRemoteCerts
	I1008 23:00:22.045958  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:00:22.046043  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.065464  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:20.814951  200074 provision.go:177] copyRemoteCerts
	I1008 23:00:20.815017  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:00:20.815059  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:20.834587  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:20.947002  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:00:20.966672  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 23:00:20.987841  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 23:00:21.017825  200074 provision.go:87] duration metric: took 1.147384041s to configureAuth
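configureAuth, shown above, refreshes the host certificates under .minikube and then pushes ca.pem, server.pem and server-key.pem into /etc/docker on the node. The transfer itself goes through minikube's ssh_runner; the sketch below approximates it with the system scp binary and copies into /tmp instead, since writing /etc/docker requires root on the node. scpToNode and the target paths are illustrative assumptions.

// copy_certs.go - rough approximation of the copyRemoteCerts step.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// scpToNode copies one local file to the node over the container's published SSH port.
func scpToNode(keyPath, port, local, remote string) error {
	cmd := exec.Command("scp",
		"-i", keyPath,
		"-P", port,
		"-o", "StrictHostKeyChecking=no",
		local,
		fmt.Sprintf("docker@127.0.0.1:%s", remote))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("scp %s: %v: %s", local, err, out)
	}
	return nil
}

func main() {
	base := "/home/jenkins/minikube-integration/21681-2481/.minikube"
	key := base + "/machines/embed-certs-825429/id_rsa"
	// The three files the log copies toward /etc/docker on the node.
	for _, local := range []string{
		base + "/certs/ca.pem",
		base + "/machines/server.pem",
		base + "/machines/server-key.pem",
	} {
		if err := scpToNode(key, "33081", local, "/tmp/"+filepath.Base(local)); err != nil {
			fmt.Println(err)
		}
	}
}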
	I1008 23:00:21.017855  200074 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:00:21.018073  200074 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:21.018178  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.038971  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:21.039282  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:21.039304  200074 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:00:21.410917  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:00:21.410937  200074 machine.go:96] duration metric: took 5.054666132s to provisionDockerMachine
	I1008 23:00:21.410948  200074 start.go:293] postStartSetup for "embed-certs-825429" (driver="docker")
	I1008 23:00:21.410958  200074 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:00:21.411025  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:00:21.411063  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.439350  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.543094  200074 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:00:21.547406  200074 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:00:21.547435  200074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:00:21.547450  200074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:00:21.547507  200074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:00:21.547597  200074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:00:21.547700  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:00:21.556609  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:21.585243  200074 start.go:296] duration metric: took 174.278532ms for postStartSetup
	I1008 23:00:21.585334  200074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:00:21.585378  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.621333  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.735318  200074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:00:21.743106  200074 fix.go:56] duration metric: took 5.706738194s for fixHost
	I1008 23:00:21.743134  200074 start.go:83] releasing machines lock for "embed-certs-825429", held for 5.70679646s
	I1008 23:00:21.743208  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:21.767422  200074 ssh_runner.go:195] Run: cat /version.json
	I1008 23:00:21.767474  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.767704  200074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:00:21.767778  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.807518  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.808257  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:22.023792  200074 ssh_runner.go:195] Run: systemctl --version
	I1008 23:00:22.032065  200074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:00:22.086835  200074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:00:22.095791  200074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:00:22.095870  200074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:00:22.106263  200074 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 23:00:22.106289  200074 start.go:495] detecting cgroup driver to use...
	I1008 23:00:22.106323  200074 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:00:22.106377  200074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:00:22.126344  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:00:22.142497  200074 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:00:22.142563  200074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:00:22.158960  200074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:00:22.174798  200074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:00:22.323493  200074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:00:22.466670  200074 docker.go:234] disabling docker service ...
	I1008 23:00:22.466740  200074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:00:22.483900  200074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:00:22.498887  200074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:00:22.646149  200074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:00:22.804808  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:00:22.821564  200074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:00:22.839222  200074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:00:22.839285  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.851109  200074 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:00:22.851182  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.863916  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.878286  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.887691  200074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:00:22.897074  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.909548  200074 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.919602  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.930018  200074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:00:22.938657  200074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:00:22.946980  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:23.134756  200074 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:00:23.291036  200074 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:00:23.291115  200074 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:00:23.295899  200074 start.go:563] Will wait 60s for crictl version
	I1008 23:00:23.295972  200074 ssh_runner.go:195] Run: which crictl
	I1008 23:00:23.300513  200074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:00:23.339721  200074 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:00:23.339809  200074 ssh_runner.go:195] Run: crio --version
	I1008 23:00:23.382887  200074 ssh_runner.go:195] Run: crio --version
	I1008 23:00:23.427225  200074 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
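The series of sed calls above rewrites /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" next to it. Here is a self-contained sketch of the same edits using Go regexps on a sample config; the input snippet is made up for illustration.

// crio_config.go - sketch mirroring the sed expressions in the log.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Delete any conmon_cgroup line, then re-insert it as "pod" after cgroup_manager,
	// matching the two follow-up sed calls in the log.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}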
	I1008 23:00:22.179705  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:00:22.201073  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 23:00:22.231111  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 23:00:22.265814  200735 provision.go:87] duration metric: took 991.378792ms to configureAuth
	I1008 23:00:22.265882  200735 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:00:22.266132  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:22.266293  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.285804  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:22.286122  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:22.286137  200735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:00:22.656376  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:00:22.656462  200735 machine.go:96] duration metric: took 4.940857891s to provisionDockerMachine
	I1008 23:00:22.656490  200735 start.go:293] postStartSetup for "default-k8s-diff-port-779490" (driver="docker")
	I1008 23:00:22.656532  200735 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:00:22.656635  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:00:22.656703  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.681602  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:22.795033  200735 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:00:22.799606  200735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:00:22.799632  200735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:00:22.799644  200735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:00:22.799704  200735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:00:22.799788  200735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:00:22.799891  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:00:22.809604  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:22.832880  200735 start.go:296] duration metric: took 176.344915ms for postStartSetup
	I1008 23:00:22.833082  200735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:00:22.833170  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.857779  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:22.964061  200735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:00:22.969468  200735 fix.go:56] duration metric: took 5.581560799s for fixHost
	I1008 23:00:22.969491  200735 start.go:83] releasing machines lock for "default-k8s-diff-port-779490", held for 5.581607766s
	I1008 23:00:22.969557  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:22.988681  200735 ssh_runner.go:195] Run: cat /version.json
	I1008 23:00:22.988742  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.988958  200735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:00:22.989020  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:23.026248  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:23.043081  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:23.248291  200735 ssh_runner.go:195] Run: systemctl --version
	I1008 23:00:23.255759  200735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:00:23.326213  200735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:00:23.335019  200735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:00:23.335098  200735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:00:23.344495  200735 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 23:00:23.344539  200735 start.go:495] detecting cgroup driver to use...
	I1008 23:00:23.344575  200735 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:00:23.344639  200735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:00:23.367326  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:00:23.380944  200735 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:00:23.381008  200735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:00:23.398756  200735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:00:23.412634  200735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:00:23.559101  200735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:00:23.743425  200735 docker.go:234] disabling docker service ...
	I1008 23:00:23.743510  200735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:00:23.767092  200735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:00:23.784102  200735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:00:23.992289  200735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:00:24.197499  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:00:24.213564  200735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:00:24.241135  200735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:00:24.241200  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.259960  200735 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:00:24.260094  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.270690  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.284851  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.296200  200735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:00:24.304654  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.313931  200735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.322480  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.333103  200735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:00:24.342318  200735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:00:24.350381  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:24.494463  200735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:00:24.666167  200735 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:00:24.666337  200735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:00:24.670699  200735 start.go:563] Will wait 60s for crictl version
	I1008 23:00:24.670769  200735 ssh_runner.go:195] Run: which crictl
	I1008 23:00:24.674726  200735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:00:24.721851  200735 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:00:24.721939  200735 ssh_runner.go:195] Run: crio --version
	I1008 23:00:24.775722  200735 ssh_runner.go:195] Run: crio --version
	I1008 23:00:24.813408  200735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
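After restarting crio, minikube polls for /var/run/crio/crio.sock ("Will wait 60s for socket path") before asking crictl for its version. A small sketch of such a wait loop follows; waitForSocket is a hypothetical helper, not the code behind start.go:542.

// wait_socket.go - sketch of waiting for the CRI socket with a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; crictl / CRI calls can proceed
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio.sock is available")
}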
	I1008 23:00:23.430030  200074 cli_runner.go:164] Run: docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:00:23.456528  200074 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 23:00:23.460989  200074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
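The bash one-liner above pins host.minikube.internal in /etc/hosts: drop any existing entry, then append one pointing at the docker network gateway (192.168.76.1 here). The same filtering expressed in plain Go, operating on a string rather than the real file, as a sketch only:

// pin_hosts.go - stdlib-only equivalent of the grep -v / echo rewrite in the log.
package main

import (
	"fmt"
	"strings"
)

func pinHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same filter as: grep -v $'\thost.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
	fmt.Print(pinHost(hosts, "192.168.76.1", "host.minikube.internal"))
}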
	I1008 23:00:23.482225  200074 kubeadm.go:883] updating cluster {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:00:23.482358  200074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:23.482421  200074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:23.531360  200074 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:23.531387  200074 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:00:23.531462  200074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:23.569867  200074 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:23.569936  200074 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:00:23.569960  200074 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1008 23:00:23.570103  200074 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-825429 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:00:23.570200  200074 ssh_runner.go:195] Run: crio config
	I1008 23:00:23.663769  200074 cni.go:84] Creating CNI manager for ""
	I1008 23:00:23.663807  200074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:23.663827  200074 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 23:00:23.663851  200074 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825429 NodeName:embed-certs-825429 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:00:23.664032  200074 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825429"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:00:23.664188  200074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:00:23.673332  200074 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:00:23.673424  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:00:23.682110  200074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1008 23:00:23.698014  200074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:00:23.714241  200074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1008 23:00:23.730391  200074 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:00:23.734792  200074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:23.747684  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:23.928606  200074 ssh_runner.go:195] Run: sudo systemctl start kubelet
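The "scp memory --> ..." lines a few entries above show that the rendered kubelet drop-in, kubelet.service unit and kubeadm.yaml are generated in memory and streamed straight to the node rather than written locally first. A rough approximation pipes the bytes into sudo tee over the system ssh client; pushBytes and the placeholder YAML are assumptions, while the port and key path are the ones reported in the log.

// push_config.go - sketch of streaming an in-memory config file to the node.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func pushBytes(port, keyPath, remotePath string, data []byte) error {
	cmd := exec.Command("ssh",
		"-p", port,
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"docker@127.0.0.1",
		fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
	cmd.Stdin = bytes.NewReader(data)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("push %s: %v: %s", remotePath, err, out)
	}
	return nil
}

func main() {
	kubeadmYAML := []byte("# rendered kubeadm.yaml contents would go here\n")
	err := pushBytes("33081",
		"/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa",
		"/var/tmp/minikube/kubeadm.yaml.new", kubeadmYAML)
	fmt.Println(err)
}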
	I1008 23:00:23.946415  200074 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429 for IP: 192.168.76.2
	I1008 23:00:23.946441  200074 certs.go:195] generating shared ca certs ...
	I1008 23:00:23.946461  200074 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:23.946635  200074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:00:23.946693  200074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:00:23.946706  200074 certs.go:257] generating profile certs ...
	I1008 23:00:23.946793  200074 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key
	I1008 23:00:23.946881  200074 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3
	I1008 23:00:23.946947  200074 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key
	I1008 23:00:23.947094  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:00:23.947129  200074 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:00:23.947142  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:00:23.947170  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:00:23.947193  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:00:23.947224  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:00:23.947272  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:23.947891  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:00:23.971323  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:00:23.996302  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:00:24.027533  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:00:24.067397  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 23:00:24.113587  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 23:00:24.171396  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:00:24.233317  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:00:24.281842  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:00:24.312837  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:00:24.337367  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:00:24.364278  200074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:00:24.380163  200074 ssh_runner.go:195] Run: openssl version
	I1008 23:00:24.402171  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:00:24.411218  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.420653  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.420720  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.477008  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:00:24.486489  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:00:24.495742  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.500273  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.500338  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.545507  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:00:24.554243  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:00:24.568916  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.573351  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.573418  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.618186  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:00:24.629747  200074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:00:24.634953  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:00:24.681889  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:00:24.725355  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:00:24.834276  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:00:24.932960  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:00:25.074571  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
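The openssl x509 -noout -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. The same check can be done natively with crypto/x509, as in this sketch; minikube itself shells out to openssl on the node.

// checkend.go - native-Go equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate in pemPath expires before now+d.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need regeneration")
	}
}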
	I1008 23:00:25.193985  200074 kubeadm.go:400] StartCluster: {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:25.194067  200074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:00:25.194141  200074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:00:25.269452  200074 cri.go:89] found id: "55041cc30a387a17c3c9cf147c52e73bd7ccd0183b6e8e9db71a9640bc8f2175"
	I1008 23:00:25.269472  200074 cri.go:89] found id: "22eefec3ff76db05811d4a86718d52b7b055ea7d7d671f8dbebc79eb5b28c061"
	I1008 23:00:25.269477  200074 cri.go:89] found id: "2b4397a485127543aacc4c006f8eda3f76ef0a1494d94a217bad28ca9644dec3"
	I1008 23:00:25.269481  200074 cri.go:89] found id: "a4d4c06603233f6d3f0466d405ac5015842b9b9a3ddd88eaeb71a429911303a0"
	I1008 23:00:25.269498  200074 cri.go:89] found id: ""
	I1008 23:00:25.269546  200074 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 23:00:25.281173  200074 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:00:25Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:00:25.281268  200074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:00:25.322177  200074 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:00:25.322195  200074 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:00:25.322243  200074 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:00:25.362965  200074 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:00:25.363367  200074 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-825429" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:25.363461  200074 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-825429" cluster setting kubeconfig missing "embed-certs-825429" context setting]
	I1008 23:00:25.363775  200074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.365003  200074 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:00:25.380609  200074 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1008 23:00:25.380686  200074 kubeadm.go:601] duration metric: took 58.482086ms to restartPrimaryControlPlane
	I1008 23:00:25.380710  200074 kubeadm.go:402] duration metric: took 186.742153ms to StartCluster
	I1008 23:00:25.380754  200074 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.380828  200074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:25.381889  200074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.382365  200074 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:25.382428  200074 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:00:25.382473  200074 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:00:25.382797  200074 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825429"
	I1008 23:00:25.382821  200074 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-825429"
	W1008 23:00:25.382827  200074 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:00:25.382848  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.382884  200074 addons.go:69] Setting dashboard=true in profile "embed-certs-825429"
	I1008 23:00:25.382903  200074 addons.go:238] Setting addon dashboard=true in "embed-certs-825429"
	W1008 23:00:25.382909  200074 addons.go:247] addon dashboard should already be in state true
	I1008 23:00:25.382947  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.383306  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.383427  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.383753  200074 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825429"
	I1008 23:00:25.383775  200074 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825429"
	I1008 23:00:25.384049  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.389699  200074 out.go:179] * Verifying Kubernetes components...
	I1008 23:00:25.397744  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:25.427867  200074 addons.go:238] Setting addon default-storageclass=true in "embed-certs-825429"
	W1008 23:00:25.427894  200074 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:00:25.427918  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.428350  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.462323  200074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:00:25.462386  200074 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:00:25.465277  200074 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:00:25.465378  200074 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:25.465394  200074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:00:25.465457  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.468927  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:00:25.468950  200074 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:00:25.469011  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.506947  200074 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:25.506970  200074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:00:25.507029  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.520333  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:25.546607  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:25.556438  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:24.816796  200735 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
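The --format string passed to `docker network inspect` above is a Go text/template evaluated against the network object. A toy sketch of the same templating mechanism with a stand-in struct and made-up values (abbreviated, and not Docker's real types):

	package main

	import (
		"os"
		"text/template"
	)

	// Minimal stand-ins for the fields the template references; this only
	// illustrates how {{range}} and {{index}} render, with an MTU fallback of 0.
	type ipamConfig struct{ Subnet, Gateway string }
	type network struct {
		Name, Driver string
		IPAM         struct{ Config []ipamConfig }
		Options      map[string]string
	}

	func main() {
		const format = `{"Name": "{{.Name}}","Driver": "{{.Driver}}",` +
			`"Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
			`"MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}}`

		n := network{Name: "default-k8s-diff-port-779490", Driver: "bridge"}
		n.IPAM.Config = []ipamConfig{{Subnet: "192.168.85.0/24", Gateway: "192.168.85.1"}}
		n.Options = map[string]string{} // no MTU option set, so the template falls back to 0

		tmpl := template.Must(template.New("net").Parse(format))
		_ = tmpl.Execute(os.Stdout, n) // prints one JSON-like line
	}
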
	I1008 23:00:24.843704  200735 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 23:00:24.847692  200735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:24.861363  200735 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:00:24.861469  200735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:24.861518  200735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:24.910267  200735 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:24.910349  200735 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:00:24.910448  200735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:24.962779  200735 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:24.962801  200735 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:00:24.962808  200735 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1008 23:00:24.962923  200735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-779490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:00:24.962999  200735 ssh_runner.go:195] Run: crio config
	I1008 23:00:25.062075  200735 cni.go:84] Creating CNI manager for ""
	I1008 23:00:25.062100  200735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:25.062118  200735 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 23:00:25.062149  200735 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-779490 NodeName:default-k8s-diff-port-779490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:00:25.062285  200735 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-779490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:00:25.062361  200735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:00:25.074284  200735 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:00:25.074371  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:00:25.088117  200735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1008 23:00:25.106557  200735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:00:25.129827  200735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
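The kubeadm.yaml.new just copied to the node is the multi-document YAML stream shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). A minimal sketch of reading such a stream document by document, assuming gopkg.in/yaml.v3 is available; this is not minikube's own parser:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the multi-document stream
				}
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
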
	I1008 23:00:25.149881  200735 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:00:25.154629  200735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
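The shell one-liner above rewrites /etc/hosts by filtering out any existing control-plane.minikube.internal line, appending the fresh mapping, and copying the result back into place with sudo. A rough Go equivalent that only stages the result in /tmp (a sketch, not minikube code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry drops any existing line for name and appends a fresh
	// "ip<TAB>name" mapping, the same effect as the grep -v / echo pipeline above.
	func upsertHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		out := upsertHostsEntry(string(data), "192.168.85.2", "control-plane.minikube.internal")
		// Writing back to /etc/hosts needs root; stage the result in /tmp instead.
		if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("staged updated hosts file at /tmp/hosts.new")
	}
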
	I1008 23:00:25.168582  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:25.460517  200735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:25.501961  200735 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490 for IP: 192.168.85.2
	I1008 23:00:25.501997  200735 certs.go:195] generating shared ca certs ...
	I1008 23:00:25.502015  200735 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.502157  200735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:00:25.502198  200735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:00:25.502204  200735 certs.go:257] generating profile certs ...
	I1008 23:00:25.502286  200735 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key
	I1008 23:00:25.502350  200735 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765
	I1008 23:00:25.502386  200735 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key
	I1008 23:00:25.502503  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:00:25.502530  200735 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:00:25.502538  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:00:25.502563  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:00:25.502588  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:00:25.502609  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:00:25.502650  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:25.503267  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:00:25.592800  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:00:25.646744  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:00:25.708575  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:00:25.781282  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 23:00:25.818906  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 23:00:25.877017  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:00:25.917052  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:00:25.947665  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:00:25.998644  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:00:26.025504  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:00:26.067106  200735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:00:26.088824  200735 ssh_runner.go:195] Run: openssl version
	I1008 23:00:26.100299  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:00:26.113073  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.120724  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.120843  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.190335  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:00:26.198935  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:00:26.210820  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.218162  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.218283  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.346366  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:00:26.373203  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:00:26.389547  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.402275  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.402419  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.505353  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:00:26.520251  200735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:00:26.536115  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:00:26.692708  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:00:26.825179  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:00:26.994307  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:00:27.130884  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:00:27.230322  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 23:00:27.336269  200735 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:27.336415  200735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:00:27.336525  200735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:00:27.395074  200735 cri.go:89] found id: "0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d"
	I1008 23:00:27.395140  200735 cri.go:89] found id: "b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a"
	I1008 23:00:27.395160  200735 cri.go:89] found id: "a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec"
	I1008 23:00:27.395184  200735 cri.go:89] found id: "d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9"
	I1008 23:00:27.395221  200735 cri.go:89] found id: ""
	I1008 23:00:27.395308  200735 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 23:00:27.426213  200735 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:00:27Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:00:27.426366  200735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:00:27.451284  200735 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:00:27.451347  200735 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:00:27.451438  200735 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:00:27.470047  200735 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:00:27.470958  200735 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-779490" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:27.471537  200735 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-779490" cluster setting kubeconfig missing "default-k8s-diff-port-779490" context setting]
	I1008 23:00:27.472341  200735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.474373  200735 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:00:27.502661  200735 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 23:00:27.502691  200735 kubeadm.go:601] duration metric: took 51.324103ms to restartPrimaryControlPlane
	I1008 23:00:27.502701  200735 kubeadm.go:402] duration metric: took 166.440913ms to StartCluster
	I1008 23:00:27.502716  200735 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.502780  200735 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:27.504255  200735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.504498  200735 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:00:27.504946  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:27.504993  200735 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:00:27.505173  200735 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.505205  200735 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.505273  200735 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:00:27.505309  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.505228  200735 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.505496  200735 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.505504  200735 addons.go:247] addon dashboard should already be in state true
	I1008 23:00:27.505523  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.506138  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.505236  200735 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.506586  200735 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-779490"
	I1008 23:00:27.506810  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.507164  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.508033  200735 out.go:179] * Verifying Kubernetes components...
	I1008 23:00:27.511128  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:27.571481  200735 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.571510  200735 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:00:27.571533  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.571937  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.577698  200735 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:00:27.577791  200735 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:00:27.580753  200735 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:00:25.875806  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:25.933368  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:00:25.933388  200074 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:00:25.967177  200074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:25.989730  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:25.995808  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:00:25.995886  200074 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:00:26.064075  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:00:26.064158  200074 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:00:26.159420  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:00:26.159495  200074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:00:26.259916  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:00:26.260013  200074 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:00:26.366694  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:00:26.366756  200074 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:00:26.415309  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:00:26.415386  200074 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:00:26.450896  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:00:26.450973  200074 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:00:26.486667  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:26.486690  200074 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:00:26.525078  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
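The dashboard addon is applied as a single kubectl invocation with one -f flag per manifest, run against the node's own kubeconfig. A short sketch of assembling such a command with os/exec, with the binary path and a few manifest paths taken from the log (a sketch of the pattern, not minikube's deploy code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-dp.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
			// ... remaining dashboard manifests listed in the log
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
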
	I1008 23:00:27.580864  200735 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:27.580880  200735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:00:27.580952  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.583763  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:00:27.583795  200735 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:00:27.583868  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.614715  200735 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:27.614741  200735 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:00:27.614805  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.638478  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.657760  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.663405  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.965178  200735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:28.011190  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:28.042994  200735 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 23:00:28.104531  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:00:28.104603  200735 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:00:28.169664  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:00:28.169736  200735 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:00:28.180277  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:28.323258  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:00:28.323335  200735 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:00:28.459418  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:00:28.459558  200735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:00:28.517653  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:00:28.517677  200735 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:00:28.543581  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:00:28.543607  200735 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:00:28.568175  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:00:28.568200  200735 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:00:28.591552  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:00:28.591579  200735 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:00:28.624882  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:28.624907  200735 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:00:28.682187  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:36.554642  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.678801563s)
	I1008 23:00:36.554692  200074 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.587441579s)
	I1008 23:00:36.554723  200074 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825429" to be "Ready" ...
	I1008 23:00:36.555033  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.565226311s)
	I1008 23:00:36.555298  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.030193657s)
	I1008 23:00:36.558520  200074 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-825429 addons enable metrics-server
	
	I1008 23:00:36.588258  200074 node_ready.go:49] node "embed-certs-825429" is "Ready"
	I1008 23:00:36.588291  200074 node_ready.go:38] duration metric: took 33.550217ms for node "embed-certs-825429" to be "Ready" ...
	I1008 23:00:36.588304  200074 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:00:36.588362  200074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:00:36.604701  200074 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1008 23:00:35.825467  200735 node_ready.go:49] node "default-k8s-diff-port-779490" is "Ready"
	I1008 23:00:35.825499  200735 node_ready.go:38] duration metric: took 7.782419961s for node "default-k8s-diff-port-779490" to be "Ready" ...
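The node_ready wait above polls the node object until its Ready condition turns True, with a 6m budget. A minimal client-go sketch of the same idea, using the kubeconfig path from the log (a plain polling loop, not minikube's node_ready helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21681-2481/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same budget as the log
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-779490", metav1.GetOptions{})
			if err == nil && nodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("node never became Ready")
	}
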
	I1008 23:00:35.825513  200735 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:00:35.825575  200735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:00:38.105427  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.094147032s)
	I1008 23:00:38.105534  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.925184377s)
	I1008 23:00:38.105652  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.423327121s)
	I1008 23:00:38.105678  200735 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.280089174s)
	I1008 23:00:38.106178  200735 api_server.go:72] duration metric: took 10.601654805s to wait for apiserver process to appear ...
	I1008 23:00:38.106187  200735 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:00:38.106203  200735 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 23:00:38.109033  200735 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-779490 addons enable metrics-server
	
	I1008 23:00:38.130970  200735 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 23:00:38.131050  200735 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
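The 500 above comes from the apiserver's /healthz endpoint aggregating individual checks (here poststarthook/rbac/bootstrap-roles has not finished); the poll is simply repeated until it returns 200, as it does a few lines later. A minimal Go sketch of such a poll, skipping TLS verification because the endpoint presents the cluster's self-signed certificates (a sketch only, not minikube's health checker):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is signed by minikubeCA, which this sketch does not trust.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.85.2:8444/healthz" // endpoint from the log
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz: ok")
					return
				}
				fmt.Println("apiserver healthz:", resp.Status)
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for healthz")
	}
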
	I1008 23:00:38.161807  200735 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1008 23:00:36.607526  200074 addons.go:514] duration metric: took 11.225039641s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1008 23:00:36.616796  200074 api_server.go:72] duration metric: took 11.234244971s to wait for apiserver process to appear ...
	I1008 23:00:36.616820  200074 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:00:36.616839  200074 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1008 23:00:36.626167  200074 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1008 23:00:36.627242  200074 api_server.go:141] control plane version: v1.34.1
	I1008 23:00:36.627269  200074 api_server.go:131] duration metric: took 10.441367ms to wait for apiserver health ...
	I1008 23:00:36.627278  200074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:00:36.631675  200074 system_pods.go:59] 8 kube-system pods found
	I1008 23:00:36.631714  200074 system_pods.go:61] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:36.631722  200074 system_pods.go:61] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 23:00:36.631729  200074 system_pods.go:61] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 23:00:36.631735  200074 system_pods.go:61] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:36.631742  200074 system_pods.go:61] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:36.631750  200074 system_pods.go:61] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 23:00:36.631757  200074 system_pods.go:61] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:36.631768  200074 system_pods.go:61] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 23:00:36.631774  200074 system_pods.go:74] duration metric: took 4.489884ms to wait for pod list to return data ...
	I1008 23:00:36.631788  200074 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:00:36.634659  200074 default_sa.go:45] found service account: "default"
	I1008 23:00:36.634682  200074 default_sa.go:55] duration metric: took 2.887786ms for default service account to be created ...
	I1008 23:00:36.634693  200074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 23:00:36.638046  200074 system_pods.go:86] 8 kube-system pods found
	I1008 23:00:36.638083  200074 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:36.638092  200074 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 23:00:36.638097  200074 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 23:00:36.638104  200074 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:36.638116  200074 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:36.638121  200074 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 23:00:36.638127  200074 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:36.638134  200074 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 23:00:36.638141  200074 system_pods.go:126] duration metric: took 3.443001ms to wait for k8s-apps to be running ...
	I1008 23:00:36.638155  200074 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 23:00:36.638211  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:00:36.653778  200074 system_svc.go:56] duration metric: took 15.614806ms WaitForService to wait for kubelet
	I1008 23:00:36.653803  200074 kubeadm.go:586] duration metric: took 11.271256497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:36.653821  200074 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:00:36.657347  200074 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:00:36.657379  200074 node_conditions.go:123] node cpu capacity is 2
	I1008 23:00:36.657391  200074 node_conditions.go:105] duration metric: took 3.563849ms to run NodePressure ...
	I1008 23:00:36.657403  200074 start.go:241] waiting for startup goroutines ...
	I1008 23:00:36.657411  200074 start.go:246] waiting for cluster config update ...
	I1008 23:00:36.657423  200074 start.go:255] writing updated cluster config ...
	I1008 23:00:36.657783  200074 ssh_runner.go:195] Run: rm -f paused
	I1008 23:00:36.670223  200074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:00:36.682756  200074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 23:00:38.706369  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	I1008 23:00:38.164701  200735 addons.go:514] duration metric: took 10.659691491s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1008 23:00:38.607275  200735 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 23:00:38.622438  200735 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1008 23:00:38.624605  200735 api_server.go:141] control plane version: v1.34.1
	I1008 23:00:38.624637  200735 api_server.go:131] duration metric: took 518.442986ms to wait for apiserver health ...
	I1008 23:00:38.624648  200735 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:00:38.630538  200735 system_pods.go:59] 8 kube-system pods found
	I1008 23:00:38.630582  200735 system_pods.go:61] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:38.630619  200735 system_pods.go:61] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 23:00:38.630633  200735 system_pods.go:61] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 23:00:38.630641  200735 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:38.630649  200735 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:38.630659  200735 system_pods.go:61] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 23:00:38.630668  200735 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:38.630688  200735 system_pods.go:61] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 23:00:38.630701  200735 system_pods.go:74] duration metric: took 6.047091ms to wait for pod list to return data ...
	I1008 23:00:38.630708  200735 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:00:38.636880  200735 default_sa.go:45] found service account: "default"
	I1008 23:00:38.636933  200735 default_sa.go:55] duration metric: took 6.183914ms for default service account to be created ...
	I1008 23:00:38.636950  200735 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 23:00:38.641529  200735 system_pods.go:86] 8 kube-system pods found
	I1008 23:00:38.641561  200735 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:38.641570  200735 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 23:00:38.641575  200735 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 23:00:38.641672  200735 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:38.641691  200735 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:38.641703  200735 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 23:00:38.641710  200735 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:38.641719  200735 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 23:00:38.641727  200735 system_pods.go:126] duration metric: took 4.769699ms to wait for k8s-apps to be running ...
	I1008 23:00:38.641752  200735 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 23:00:38.641843  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:00:38.657309  200735 system_svc.go:56] duration metric: took 15.563712ms WaitForService to wait for kubelet
	I1008 23:00:38.657341  200735 kubeadm.go:586] duration metric: took 11.152818203s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:38.657392  200735 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:00:38.660817  200735 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:00:38.660857  200735 node_conditions.go:123] node cpu capacity is 2
	I1008 23:00:38.660900  200735 node_conditions.go:105] duration metric: took 3.495048ms to run NodePressure ...
	I1008 23:00:38.660913  200735 start.go:241] waiting for startup goroutines ...
	I1008 23:00:38.660925  200735 start.go:246] waiting for cluster config update ...
	I1008 23:00:38.660937  200735 start.go:255] writing updated cluster config ...
	I1008 23:00:38.661285  200735 ssh_runner.go:195] Run: rm -f paused
	I1008 23:00:38.665450  200735 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:00:38.681495  200735 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 23:00:40.702946  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:41.192108  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:43.194681  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:45.689665  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:43.188107  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:45.195152  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:47.694917  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:50.202214  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:47.201882  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:49.202683  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:51.246618  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:52.690218  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:55.188303  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:53.690293  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:56.191657  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:57.694147  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:00.215108  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:58.688765  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:00.690867  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:02.690268  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:05.191132  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:03.190806  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:05.687338  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:07.691307  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	I1008 23:01:09.189198  200735 pod_ready.go:94] pod "coredns-66bc5c9577-9xx2v" is "Ready"
	I1008 23:01:09.189221  200735 pod_ready.go:86] duration metric: took 30.507687365s for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.193878  200735 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.198549  200735 pod_ready.go:94] pod "etcd-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.198580  200735 pod_ready.go:86] duration metric: took 4.672663ms for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.202726  200735 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.216341  200735 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.216428  200735 pod_ready.go:86] duration metric: took 13.672156ms for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.221298  200735 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.385313  200735 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.385345  200735 pod_ready.go:86] duration metric: took 164.020409ms for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.585312  200735 pod_ready.go:83] waiting for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.986012  200735 pod_ready.go:94] pod "kube-proxy-jrvxc" is "Ready"
	I1008 23:01:09.986041  200735 pod_ready.go:86] duration metric: took 400.698358ms for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.190147  200735 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.587493  200735 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:10.587525  200735 pod_ready.go:86] duration metric: took 397.349388ms for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.587538  200735 pod_ready.go:40] duration metric: took 31.922052481s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:01:10.662421  200735 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:01:10.665744  200735 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-779490" cluster and "default" namespace by default
	W1008 23:01:07.689010  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:09.689062  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:11.693197  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	I1008 23:01:12.189762  200074 pod_ready.go:94] pod "coredns-66bc5c9577-s7kcb" is "Ready"
	I1008 23:01:12.189792  200074 pod_ready.go:86] duration metric: took 35.506963864s for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.192723  200074 pod_ready.go:83] waiting for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.197407  200074 pod_ready.go:94] pod "etcd-embed-certs-825429" is "Ready"
	I1008 23:01:12.197430  200074 pod_ready.go:86] duration metric: took 4.678735ms for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.200027  200074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.204611  200074 pod_ready.go:94] pod "kube-apiserver-embed-certs-825429" is "Ready"
	I1008 23:01:12.204642  200074 pod_ready.go:86] duration metric: took 4.593655ms for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.206885  200074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.387130  200074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-825429" is "Ready"
	I1008 23:01:12.387178  200074 pod_ready.go:86] duration metric: took 180.247707ms for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.587705  200074 pod_ready.go:83] waiting for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.987048  200074 pod_ready.go:94] pod "kube-proxy-86wtc" is "Ready"
	I1008 23:01:12.987076  200074 pod_ready.go:86] duration metric: took 399.301634ms for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.187216  200074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.587259  200074 pod_ready.go:94] pod "kube-scheduler-embed-certs-825429" is "Ready"
	I1008 23:01:13.587290  200074 pod_ready.go:86] duration metric: took 400.047489ms for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.587304  200074 pod_ready.go:40] duration metric: took 36.916992323s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:01:13.655798  200074 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:01:13.659151  200074 out.go:179] * Done! kubectl is now configured to use "embed-certs-825429" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.855701066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.864371454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.865083965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.87996613Z" level=info msg="Created container 4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m/dashboard-metrics-scraper" id=5cbb8bc8-db54-4877-9529-4b83a6b610db name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.881022957Z" level=info msg="Starting container: 4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395" id=f540051c-60db-4562-a130-85343f3d47c2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.884238008Z" level=info msg="Started container" PID=1662 containerID=4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m/dashboard-metrics-scraper id=f540051c-60db-4562-a130-85343f3d47c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c400d705e85d71b6caae0b28251b9ea6896ead7d367498002c23881f9c62ce0f
	Oct 08 23:01:16 default-k8s-diff-port-779490 conmon[1660]: conmon 4851ac155c8ccb03c9a0 <ninfo>: container 1662 exited with status 1
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.228121364Z" level=info msg="Removing container: 8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac" id=41adadb3-7c2b-4aa6-9e84-72d1eaf4febe name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.23818218Z" level=info msg="Error loading conmon cgroup of container 8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac: cgroup deleted" id=41adadb3-7c2b-4aa6-9e84-72d1eaf4febe name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.242130369Z" level=info msg="Removed container 8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m/dashboard-metrics-scraper" id=41adadb3-7c2b-4aa6-9e84-72d1eaf4febe name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.730443041Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.73499512Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.735031888Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.735057808Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.738999565Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.739033428Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.739057034Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.742230305Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.742285682Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.742308706Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.745901223Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.745938606Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.745964575Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.749016826Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.749047924Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	4851ac155c8cc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   2                   c400d705e85d7       dashboard-metrics-scraper-6ffb444bf9-kpl7m             kubernetes-dashboard
	f53fecc8b57f0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago       Running             storage-provisioner         2                   ecb47aea06a8c       storage-provisioner                                    kube-system
	278e35cc7fbcc       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   42862bb0e247d       kubernetes-dashboard-855c9754f9-ppnz2                  kubernetes-dashboard
	e5d915946b8ea       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago       Running             kube-proxy                  1                   c85acbd1d9d68       kube-proxy-jrvxc                                       kube-system
	1944ceb47b7c9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago       Exited              storage-provisioner         1                   ecb47aea06a8c       storage-provisioner                                    kube-system
	06c0a442bfb2b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago       Running             busybox                     1                   83ce0add140c8       busybox                                                default
	4a200e7e0c4c7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago       Running             coredns                     1                   ffc1bea46902e       coredns-66bc5c9577-9xx2v                               kube-system
	8a7be09e8d335       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago       Running             kindnet-cni                 1                   6cd4882a59918       kindnet-9vmvl                                          kube-system
	0c79858102e85       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   22c87fc132425       kube-apiserver-default-k8s-diff-port-779490            kube-system
	b17976f27670a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   61e125965e607       kube-scheduler-default-k8s-diff-port-779490            kube-system
	a9d1c9861bc94       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   47997e6ad5742       kube-controller-manager-default-k8s-diff-port-779490   kube-system
	d4862acbb3253       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   f4dd3fb2c37a5       etcd-default-k8s-diff-port-779490                      kube-system
	
	
	==> coredns [4a200e7e0c4c7fa3195d199b8f5e47922f16fe844523cd9c5eb8cb9c5b3a5f92] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60649 - 55716 "HINFO IN 4365178978083387005.6166260651569986081. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030115133s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-779490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-779490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=default-k8s-diff-port-779490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_59_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:58:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-779490
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 23:01:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 23:00:56 +0000   Wed, 08 Oct 2025 22:58:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 23:00:56 +0000   Wed, 08 Oct 2025 22:58:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 23:00:56 +0000   Wed, 08 Oct 2025 22:58:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 23:00:56 +0000   Wed, 08 Oct 2025 22:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-779490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0feade24457404cb032ff4236d61a10
	  System UUID:                c1cdfe18-651a-4f09-abda-0497a79b449c
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-9xx2v                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-779490                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-9vmvl                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-779490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-779490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-jrvxc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-default-k8s-diff-port-779490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kpl7m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ppnz2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m20s                  node-controller  Node default-k8s-diff-port-779490 event: Registered Node default-k8s-diff-port-779490 in Controller
	  Normal   NodeReady                98s                    kubelet          Node default-k8s-diff-port-779490 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node default-k8s-diff-port-779490 event: Registered Node default-k8s-diff-port-779490 in Controller
	
	
	==> dmesg <==
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:58] overlayfs: idmapped layers are currently not supported
	[  +5.164783] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:00] overlayfs: idmapped layers are currently not supported
	[  +1.568442] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9] <==
	{"level":"warn","ts":"2025-10-08T23:00:32.530107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.559024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.597259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.641165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.688652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.737969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.766638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.797985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.824340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.854813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.865958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.900881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.945886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.008178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.032541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.074946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.115933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.149004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.193948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.239175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.295061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.325114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.353017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.391700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.569530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39816","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:01:27 up  1:43,  0 user,  load average: 3.46, 2.47, 2.01
	Linux default-k8s-diff-port-779490 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a7be09e8d3357ea5b26e1774372d50014be3d5c01add4f9434273ec80f5272e] <==
	I1008 23:00:37.602162       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 23:00:37.602590       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 23:00:37.602749       1 main.go:148] setting mtu 1500 for CNI 
	I1008 23:00:37.602793       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 23:00:37.602838       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T23:00:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 23:00:37.730536       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 23:00:37.804453       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 23:00:37.804494       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 23:00:37.804631       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 23:01:07.730649       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 23:01:07.805280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 23:01:07.805280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 23:01:07.808776       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1008 23:01:09.506641       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 23:01:09.506674       1 metrics.go:72] Registering metrics
	I1008 23:01:09.506746       1 controller.go:711] "Syncing nftables rules"
	I1008 23:01:17.730117       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 23:01:17.730160       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d] <==
	I1008 23:00:35.987679       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 23:00:36.020937       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 23:00:36.020961       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 23:00:36.021048       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 23:00:36.021108       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 23:00:36.021128       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 23:00:36.021333       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 23:00:36.046363       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1008 23:00:36.046449       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1008 23:00:36.046529       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1008 23:00:36.046568       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 23:00:36.057175       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 23:00:36.063521       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 23:00:36.148208       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1008 23:00:36.178098       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 23:00:36.837751       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 23:00:37.497271       1 controller.go:667] quota admission added evaluator for: namespaces
	I1008 23:00:37.639470       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 23:00:37.746977       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 23:00:37.783088       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 23:00:37.955732       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.40.244"}
	I1008 23:00:38.003242       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.25.176"}
	I1008 23:00:40.699954       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 23:00:40.918410       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 23:00:41.032676       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec] <==
	I1008 23:00:40.519794       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1008 23:00:40.519919       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1008 23:00:40.519980       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1008 23:00:40.520010       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1008 23:00:40.520040       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1008 23:00:40.523393       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 23:00:40.528115       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 23:00:40.531221       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 23:00:40.532742       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1008 23:00:40.532979       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 23:00:40.533073       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 23:00:40.533182       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-779490"
	I1008 23:00:40.533249       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1008 23:00:40.533762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1008 23:00:40.533777       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 23:00:40.533829       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1008 23:00:40.534323       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 23:00:40.535993       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 23:00:40.536064       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1008 23:00:40.536145       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1008 23:00:40.539041       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 23:00:40.551867       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 23:00:40.566701       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:00:40.566785       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 23:00:40.566818       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [e5d915946b8ea944e37566f7106abac224ef11871f731d856aaf37c2bac231dd] <==
	I1008 23:00:38.098615       1 server_linux.go:53] "Using iptables proxy"
	I1008 23:00:38.495486       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 23:00:38.603977       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 23:00:38.604104       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1008 23:00:38.604261       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 23:00:38.696390       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 23:00:38.696572       1 server_linux.go:132] "Using iptables Proxier"
	I1008 23:00:38.702280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 23:00:38.702998       1 server.go:527] "Version info" version="v1.34.1"
	I1008 23:00:38.703069       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:00:38.706920       1 config.go:106] "Starting endpoint slice config controller"
	I1008 23:00:38.707006       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 23:00:38.707321       1 config.go:200] "Starting service config controller"
	I1008 23:00:38.707328       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 23:00:38.707769       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 23:00:38.709713       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 23:00:38.710354       1 config.go:309] "Starting node config controller"
	I1008 23:00:38.710368       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 23:00:38.710375       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 23:00:38.807739       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 23:00:38.807876       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 23:00:38.810573       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a] <==
	I1008 23:00:32.699761       1 serving.go:386] Generated self-signed cert in-memory
	I1008 23:00:38.702092       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 23:00:38.702125       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:00:38.725990       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 23:00:38.726167       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 23:00:38.726316       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 23:00:38.726385       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 23:00:38.727346       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:00:38.727407       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:00:38.728865       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:00:38.733621       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:00:38.826977       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 23:00:38.834406       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:00:38.834529       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 23:00:41 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:41.071868     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l62hn\" (UniqueName: \"kubernetes.io/projected/4ce6d110-8ead-4b00-9c1c-115488a858ef-kube-api-access-l62hn\") pod \"kubernetes-dashboard-855c9754f9-ppnz2\" (UID: \"4ce6d110-8ead-4b00-9c1c-115488a858ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ppnz2"
	Oct 08 23:00:41 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:41.072706     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ce6d110-8ead-4b00-9c1c-115488a858ef-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ppnz2\" (UID: \"4ce6d110-8ead-4b00-9c1c-115488a858ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ppnz2"
	Oct 08 23:00:41 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:41.072847     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrbr\" (UniqueName: \"kubernetes.io/projected/7d132887-585f-4867-8b5e-8abd1e950fe7-kube-api-access-nbrbr\") pod \"dashboard-metrics-scraper-6ffb444bf9-kpl7m\" (UID: \"7d132887-585f-4867-8b5e-8abd1e950fe7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m"
	Oct 08 23:00:41 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:41.072940     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7d132887-585f-4867-8b5e-8abd1e950fe7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kpl7m\" (UID: \"7d132887-585f-4867-8b5e-8abd1e950fe7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m"
	Oct 08 23:00:42 default-k8s-diff-port-779490 kubelet[777]: W1008 23:00:42.505164     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/crio-42862bb0e247d76eba8244a61dc0a86c7b315762368e28d3edf595d0051efca9 WatchSource:0}: Error finding container 42862bb0e247d76eba8244a61dc0a86c7b315762368e28d3edf595d0051efca9: Status 404 returned error can't find the container with id 42862bb0e247d76eba8244a61dc0a86c7b315762368e28d3edf595d0051efca9
	Oct 08 23:00:42 default-k8s-diff-port-779490 kubelet[777]: W1008 23:00:42.536980     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/crio-c400d705e85d71b6caae0b28251b9ea6896ead7d367498002c23881f9c62ce0f WatchSource:0}: Error finding container c400d705e85d71b6caae0b28251b9ea6896ead7d367498002c23881f9c62ce0f: Status 404 returned error can't find the container with id c400d705e85d71b6caae0b28251b9ea6896ead7d367498002c23881f9c62ce0f
	Oct 08 23:00:56 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:56.150627     777 scope.go:117] "RemoveContainer" containerID="466a1ce652eb5d5063ab5732bd7c585249d47129a71aa0d4d4b3cfcfabf42486"
	Oct 08 23:00:56 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:56.169122     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ppnz2" podStartSLOduration=8.48827871 podStartE2EDuration="16.168368243s" podCreationTimestamp="2025-10-08 23:00:40 +0000 UTC" firstStartedPulling="2025-10-08 23:00:42.515958517 +0000 UTC m=+17.026586303" lastFinishedPulling="2025-10-08 23:00:50.19604805 +0000 UTC m=+24.706675836" observedRunningTime="2025-10-08 23:00:51.156060798 +0000 UTC m=+25.666688584" watchObservedRunningTime="2025-10-08 23:00:56.168368243 +0000 UTC m=+30.678996029"
	Oct 08 23:00:57 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:57.155042     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:00:57 default-k8s-diff-port-779490 kubelet[777]: E1008 23:00:57.155205     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:00:57 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:57.156389     777 scope.go:117] "RemoveContainer" containerID="466a1ce652eb5d5063ab5732bd7c585249d47129a71aa0d4d4b3cfcfabf42486"
	Oct 08 23:00:58 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:58.160424     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:00:58 default-k8s-diff-port-779490 kubelet[777]: E1008 23:00:58.160597     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:01:02 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:02.456543     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:01:02 default-k8s-diff-port-779490 kubelet[777]: E1008 23:01:02.456730     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:01:08 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:08.199270     777 scope.go:117] "RemoveContainer" containerID="1944ceb47b7c94b2edb63db70a4a7001ea79c19f4c62e47e167fe7d6263a8565"
	Oct 08 23:01:16 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:16.852305     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:01:17 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:17.225824     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:01:17 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:17.226136     777 scope.go:117] "RemoveContainer" containerID="4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395"
	Oct 08 23:01:17 default-k8s-diff-port-779490 kubelet[777]: E1008 23:01:17.226307     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:01:22 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:22.456994     777 scope.go:117] "RemoveContainer" containerID="4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395"
	Oct 08 23:01:22 default-k8s-diff-port-779490 kubelet[777]: E1008 23:01:22.457182     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:01:23 default-k8s-diff-port-779490 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 23:01:24 default-k8s-diff-port-779490 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 23:01:24 default-k8s-diff-port-779490 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [278e35cc7fbccaf5c63b64c560388a6a30f3774aced449276cff7421f19bcdfb] <==
	2025/10/08 23:00:50 Using namespace: kubernetes-dashboard
	2025/10/08 23:00:50 Using in-cluster config to connect to apiserver
	2025/10/08 23:00:50 Using secret token for csrf signing
	2025/10/08 23:00:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/08 23:00:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/08 23:00:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/08 23:00:50 Generating JWE encryption key
	2025/10/08 23:00:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/08 23:00:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/08 23:00:50 Initializing JWE encryption key from synchronized object
	2025/10/08 23:00:50 Creating in-cluster Sidecar client
	2025/10/08 23:00:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 23:00:50 Serving insecurely on HTTP port: 9090
	2025/10/08 23:01:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 23:00:50 Starting overwatch
	
	
	==> storage-provisioner [1944ceb47b7c94b2edb63db70a4a7001ea79c19f4c62e47e167fe7d6263a8565] <==
	I1008 23:00:37.582157       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 23:01:07.584034       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f53fecc8b57f03ffafccaf27e308d0f2475f20d0a79b800e28025b87e8e9f33d] <==
	I1008 23:01:08.258751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 23:01:08.274338       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 23:01:08.274396       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 23:01:08.277552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:11.732149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:15.992278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:19.595647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:22.648707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:25.671524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:25.679105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 23:01:25.679254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 23:01:25.679480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-779490_13b9a621-9c10-4be3-a2c2-77a9e596501a!
	I1008 23:01:25.682213       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1621d9ca-2fb2-43ad-b54a-b562c4b49118", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-779490_13b9a621-9c10-4be3-a2c2-77a9e596501a became leader
	W1008 23:01:25.683610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:25.690671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 23:01:25.781770       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-779490_13b9a621-9c10-4be3-a2c2-77a9e596501a!
	W1008 23:01:27.694116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:27.707408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490: exit status 2 (587.599373ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-779490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-779490
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-779490:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca",
	        "Created": "2025-10-08T22:58:32.369538297Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T23:00:17.439390742Z",
	            "FinishedAt": "2025-10-08T23:00:16.44928857Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/hosts",
	        "LogPath": "/var/lib/docker/containers/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca-json.log",
	        "Name": "/default-k8s-diff-port-779490",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-779490:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-779490",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca",
	                "LowerDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c1ebd2297c310800cd0e001597c3584e544a5202dde1ae125736aeeaeccf3f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-779490",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-779490/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-779490",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-779490",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-779490",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "62aa5cf728b093259a42ddeecdf7f43b5829a55eccf96dbfca4179b5d0f8f50a",
	            "SandboxKey": "/var/run/docker/netns/62aa5cf728b0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-779490": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:0c:9a:73:4b:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95a85807530ab25d32d1815ee29ce3fc904bd88d88973d6a88e562431efd0d87",
	                    "EndpointID": "b6f44dc4c2ca4f239fa8f920ac2d819b01a2db9731989602123c4a8ea7a4610f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-779490",
	                        "74faf5bf01ef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490: exit status 2 (416.080278ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-779490 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-779490 logs -n 25: (1.755277743s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	│ stop    │ -p no-preload-939665 --alsologtostderr -v=3                                                                                                                              │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:58 UTC │
	│ image   │ no-preload-939665 image list --format=json                                                                                                                               │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ pause   │ -p no-preload-939665 --alsologtostderr -v=1                                                                                                                              │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	│ ssh     │ force-systemd-flag-385382 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p force-systemd-flag-385382                                                                                                                                             │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                     │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                     │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-036919                                                                                                                                          │ disable-driver-mounts-036919 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:59 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │                     │
	│ stop    │ -p embed-certs-825429 --alsologtostderr -v=3                                                                                                                             │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ stop    │ -p default-k8s-diff-port-779490 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-825429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-779490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-779490 image list --format=json                                                                                                                    │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-779490 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ image   │ embed-certs-825429 image list --format=json                                                                                                                              │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p embed-certs-825429 --alsologtostderr -v=1                                                                                                                             │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 23:00:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 23:00:17.163938  200735 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:00:17.164058  200735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:00:17.164070  200735 out.go:374] Setting ErrFile to fd 2...
	I1008 23:00:17.164076  200735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:00:17.164320  200735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:00:17.164684  200735 out.go:368] Setting JSON to false
	I1008 23:00:17.165518  200735 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6168,"bootTime":1759958250,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 23:00:17.165584  200735 start.go:141] virtualization:  
	I1008 23:00:17.170349  200735 out.go:179] * [default-k8s-diff-port-779490] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 23:00:17.173550  200735 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 23:00:17.173606  200735 notify.go:220] Checking for updates...
	I1008 23:00:17.179549  200735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 23:00:17.182394  200735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:17.185318  200735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 23:00:17.188242  200735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 23:00:17.191227  200735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 23:00:17.194561  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:17.195186  200735 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 23:00:17.221784  200735 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 23:00:17.221965  200735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:00:17.290959  200735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-08 23:00:17.282099792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:00:17.291074  200735 docker.go:318] overlay module found
	I1008 23:00:17.294262  200735 out.go:179] * Using the docker driver based on existing profile
	I1008 23:00:17.297119  200735 start.go:305] selected driver: docker
	I1008 23:00:17.297140  200735 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:17.297251  200735 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 23:00:17.298023  200735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:00:17.356048  200735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-08 23:00:17.346390453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:00:17.356372  200735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:17.356415  200735 cni.go:84] Creating CNI manager for ""
	I1008 23:00:17.356471  200735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:17.356518  200735 start.go:349] cluster config:
	{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:17.359826  200735 out.go:179] * Starting "default-k8s-diff-port-779490" primary control-plane node in "default-k8s-diff-port-779490" cluster
	I1008 23:00:17.362672  200735 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 23:00:17.365466  200735 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 23:00:17.368335  200735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:17.368364  200735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 23:00:17.368384  200735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 23:00:17.368391  200735 cache.go:58] Caching tarball of preloaded images
	I1008 23:00:17.368477  200735 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 23:00:17.368487  200735 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 23:00:17.368593  200735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 23:00:17.387741  200735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 23:00:17.387766  200735 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 23:00:17.387788  200735 cache.go:232] Successfully downloaded all kic artifacts
	I1008 23:00:17.387813  200735 start.go:360] acquireMachinesLock for default-k8s-diff-port-779490: {Name:mkf9138008d7ef2884518c448a03b33b088d9068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 23:00:17.387870  200735 start.go:364] duration metric: took 34.314µs to acquireMachinesLock for "default-k8s-diff-port-779490"
	I1008 23:00:17.387894  200735 start.go:96] Skipping create...Using existing machine configuration
	I1008 23:00:17.387906  200735 fix.go:54] fixHost starting: 
	I1008 23:00:17.388165  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:17.405667  200735 fix.go:112] recreateIfNeeded on default-k8s-diff-port-779490: state=Stopped err=<nil>
	W1008 23:00:17.405698  200735 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 23:00:16.057868  200074 out.go:252] * Restarting existing docker container for "embed-certs-825429" ...
	I1008 23:00:16.057965  200074 cli_runner.go:164] Run: docker start embed-certs-825429
	I1008 23:00:16.315950  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:16.335815  200074 kic.go:430] container "embed-certs-825429" state is running.
	I1008 23:00:16.336208  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:16.356036  200074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/config.json ...
	I1008 23:00:16.356262  200074 machine.go:93] provisionDockerMachine start ...
	I1008 23:00:16.356315  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:16.378830  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:16.379148  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:16.379157  200074 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:00:16.380409  200074 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59024->127.0.0.1:33081: read: connection reset by peer
	I1008 23:00:19.529381  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 23:00:19.529407  200074 ubuntu.go:182] provisioning hostname "embed-certs-825429"
	I1008 23:00:19.529470  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:19.548688  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:19.549089  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:19.549126  200074 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825429 && echo "embed-certs-825429" | sudo tee /etc/hostname
	I1008 23:00:19.704942  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 23:00:19.705029  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:19.723786  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:19.724093  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:19.724110  200074 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825429' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825429/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825429' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:00:19.870310  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:00:19.870379  200074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:00:19.870406  200074 ubuntu.go:190] setting up certificates
	I1008 23:00:19.870417  200074 provision.go:84] configureAuth start
	I1008 23:00:19.870501  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:19.888221  200074 provision.go:143] copyHostCerts
	I1008 23:00:19.888292  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:00:19.888316  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:00:19.888394  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:00:19.888499  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:00:19.888508  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:00:19.888537  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:00:19.888603  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:00:19.888615  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:00:19.888643  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:00:19.888697  200074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825429 san=[127.0.0.1 192.168.76.2 embed-certs-825429 localhost minikube]
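	The server certificate generated here carries the SAN list shown in the entry above (127.0.0.1, 192.168.76.2, embed-certs-825429, localhost, minikube). A short sketch that parses such a PEM with crypto/x509 and prints its SANs, assuming the server.pem path from this log is readable; this is a verification aid, not part of minikube:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path copied from the log above; adjust for your environment.
		raw, err := os.ReadFile("/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw) // first PEM block is the certificate
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
		fmt.Println("Org:     ", cert.Subject.Organization)
	}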
	I1008 23:00:17.408820  200735 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-779490" ...
	I1008 23:00:17.408898  200735 cli_runner.go:164] Run: docker start default-k8s-diff-port-779490
	I1008 23:00:17.666806  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:17.691387  200735 kic.go:430] container "default-k8s-diff-port-779490" state is running.
	I1008 23:00:17.691764  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:17.715368  200735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 23:00:17.715595  200735 machine.go:93] provisionDockerMachine start ...
	I1008 23:00:17.715865  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:17.740298  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:17.740619  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:17.740636  200735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:00:17.741357  200735 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 23:00:20.909388  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 23:00:20.909415  200735 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-779490"
	I1008 23:00:20.909477  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:20.926770  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:20.927074  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:20.927096  200735 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-779490 && echo "default-k8s-diff-port-779490" | sudo tee /etc/hostname
	I1008 23:00:21.093286  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 23:00:21.093383  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:21.122816  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:21.123125  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:21.123144  200735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-779490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-779490/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-779490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:00:21.274338  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:00:21.274367  200735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:00:21.274399  200735 ubuntu.go:190] setting up certificates
	I1008 23:00:21.274412  200735 provision.go:84] configureAuth start
	I1008 23:00:21.274479  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:21.301901  200735 provision.go:143] copyHostCerts
	I1008 23:00:21.301972  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:00:21.301995  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:00:21.302061  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:00:21.302175  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:00:21.302187  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:00:21.302212  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:00:21.302280  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:00:21.302297  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:00:21.302320  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:00:21.302377  200735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-779490 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-779490 localhost minikube]
	I1008 23:00:22.045829  200735 provision.go:177] copyRemoteCerts
	I1008 23:00:22.045958  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:00:22.046043  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.065464  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:20.814951  200074 provision.go:177] copyRemoteCerts
	I1008 23:00:20.815017  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:00:20.815059  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:20.834587  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:20.947002  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:00:20.966672  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 23:00:20.987841  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 23:00:21.017825  200074 provision.go:87] duration metric: took 1.147384041s to configureAuth
	I1008 23:00:21.017855  200074 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:00:21.018073  200074 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:21.018178  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.038971  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:21.039282  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:21.039304  200074 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:00:21.410917  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:00:21.410937  200074 machine.go:96] duration metric: took 5.054666132s to provisionDockerMachine
	I1008 23:00:21.410948  200074 start.go:293] postStartSetup for "embed-certs-825429" (driver="docker")
	I1008 23:00:21.410958  200074 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:00:21.411025  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:00:21.411063  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.439350  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.543094  200074 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:00:21.547406  200074 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:00:21.547435  200074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:00:21.547450  200074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:00:21.547507  200074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:00:21.547597  200074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:00:21.547700  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:00:21.556609  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:21.585243  200074 start.go:296] duration metric: took 174.278532ms for postStartSetup
	I1008 23:00:21.585334  200074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:00:21.585378  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.621333  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.735318  200074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:00:21.743106  200074 fix.go:56] duration metric: took 5.706738194s for fixHost
	I1008 23:00:21.743134  200074 start.go:83] releasing machines lock for "embed-certs-825429", held for 5.70679646s
	I1008 23:00:21.743208  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:21.767422  200074 ssh_runner.go:195] Run: cat /version.json
	I1008 23:00:21.767474  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.767704  200074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:00:21.767778  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.807518  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.808257  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:22.023792  200074 ssh_runner.go:195] Run: systemctl --version
	I1008 23:00:22.032065  200074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:00:22.086835  200074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:00:22.095791  200074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:00:22.095870  200074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:00:22.106263  200074 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 23:00:22.106289  200074 start.go:495] detecting cgroup driver to use...
	I1008 23:00:22.106323  200074 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:00:22.106377  200074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:00:22.126344  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:00:22.142497  200074 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:00:22.142563  200074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:00:22.158960  200074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:00:22.174798  200074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:00:22.323493  200074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:00:22.466670  200074 docker.go:234] disabling docker service ...
	I1008 23:00:22.466740  200074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:00:22.483900  200074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:00:22.498887  200074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:00:22.646149  200074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:00:22.804808  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:00:22.821564  200074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:00:22.839222  200074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:00:22.839285  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.851109  200074 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:00:22.851182  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.863916  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.878286  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.887691  200074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:00:22.897074  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.909548  200074 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.919602  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.930018  200074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:00:22.938657  200074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:00:22.946980  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:23.134756  200074 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:00:23.291036  200074 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:00:23.291115  200074 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
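	After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock before invoking crictl. A minimal sketch of such a wait, assuming the same socket path; the polling interval is an assumption:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket stats the CRI socket until it exists or the timeout expires,
	// mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(250 * time.Millisecond)
		}
		return fmt.Errorf("%s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}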
	I1008 23:00:23.295899  200074 start.go:563] Will wait 60s for crictl version
	I1008 23:00:23.295972  200074 ssh_runner.go:195] Run: which crictl
	I1008 23:00:23.300513  200074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:00:23.339721  200074 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:00:23.339809  200074 ssh_runner.go:195] Run: crio --version
	I1008 23:00:23.382887  200074 ssh_runner.go:195] Run: crio --version
	I1008 23:00:23.427225  200074 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 23:00:22.179705  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:00:22.201073  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 23:00:22.231111  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 23:00:22.265814  200735 provision.go:87] duration metric: took 991.378792ms to configureAuth
	I1008 23:00:22.265882  200735 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:00:22.266132  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:22.266293  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.285804  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:22.286122  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:22.286137  200735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:00:22.656376  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:00:22.656462  200735 machine.go:96] duration metric: took 4.940857891s to provisionDockerMachine
	I1008 23:00:22.656490  200735 start.go:293] postStartSetup for "default-k8s-diff-port-779490" (driver="docker")
	I1008 23:00:22.656532  200735 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:00:22.656635  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:00:22.656703  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.681602  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:22.795033  200735 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:00:22.799606  200735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:00:22.799632  200735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:00:22.799644  200735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:00:22.799704  200735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:00:22.799788  200735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:00:22.799891  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:00:22.809604  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:22.832880  200735 start.go:296] duration metric: took 176.344915ms for postStartSetup
	I1008 23:00:22.833082  200735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:00:22.833170  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.857779  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:22.964061  200735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:00:22.969468  200735 fix.go:56] duration metric: took 5.581560799s for fixHost
	I1008 23:00:22.969491  200735 start.go:83] releasing machines lock for "default-k8s-diff-port-779490", held for 5.581607766s
	I1008 23:00:22.969557  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:22.988681  200735 ssh_runner.go:195] Run: cat /version.json
	I1008 23:00:22.988742  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.988958  200735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:00:22.989020  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:23.026248  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:23.043081  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:23.248291  200735 ssh_runner.go:195] Run: systemctl --version
	I1008 23:00:23.255759  200735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:00:23.326213  200735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:00:23.335019  200735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:00:23.335098  200735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:00:23.344495  200735 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 23:00:23.344539  200735 start.go:495] detecting cgroup driver to use...
	I1008 23:00:23.344575  200735 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:00:23.344639  200735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:00:23.367326  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:00:23.380944  200735 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:00:23.381008  200735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:00:23.398756  200735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:00:23.412634  200735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:00:23.559101  200735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:00:23.743425  200735 docker.go:234] disabling docker service ...
	I1008 23:00:23.743510  200735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:00:23.767092  200735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:00:23.784102  200735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:00:23.992289  200735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:00:24.197499  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:00:24.213564  200735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:00:24.241135  200735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:00:24.241200  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.259960  200735 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:00:24.260094  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.270690  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.284851  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.296200  200735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:00:24.304654  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.313931  200735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.322480  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.333103  200735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:00:24.342318  200735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:00:24.350381  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:24.494463  200735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:00:24.666167  200735 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:00:24.666337  200735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:00:24.670699  200735 start.go:563] Will wait 60s for crictl version
	I1008 23:00:24.670769  200735 ssh_runner.go:195] Run: which crictl
	I1008 23:00:24.674726  200735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:00:24.721851  200735 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:00:24.721939  200735 ssh_runner.go:195] Run: crio --version
	I1008 23:00:24.775722  200735 ssh_runner.go:195] Run: crio --version
	I1008 23:00:24.813408  200735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 23:00:23.430030  200074 cli_runner.go:164] Run: docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:00:23.456528  200074 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 23:00:23.460989  200074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:23.482225  200074 kubeadm.go:883] updating cluster {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:00:23.482358  200074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:23.482421  200074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:23.531360  200074 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:23.531387  200074 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:00:23.531462  200074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:23.569867  200074 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:23.569936  200074 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:00:23.569960  200074 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1008 23:00:23.570103  200074 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-825429 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
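	Once the drop-in carrying the ExecStart flags above has been written to the node (the 10-kubeadm.conf scp a few lines below), the effective unit can be checked with standard systemd tooling. A small sketch, assuming a shell on the node and that `systemctl` is available; this is not part of minikube:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl cat kubelet` prints the unit file plus every drop-in,
		// including the 10-kubeadm.conf that carries the ExecStart shown above.
		out, err := exec.Command("systemctl", "cat", "kubelet").CombinedOutput()
		if err != nil {
			fmt.Println("systemctl failed:", err)
		}
		fmt.Print(string(out))
	}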
	I1008 23:00:23.570200  200074 ssh_runner.go:195] Run: crio config
	I1008 23:00:23.663769  200074 cni.go:84] Creating CNI manager for ""
	I1008 23:00:23.663807  200074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:23.663827  200074 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 23:00:23.663851  200074 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825429 NodeName:embed-certs-825429 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:00:23.664032  200074 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825429"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:00:23.664188  200074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:00:23.673332  200074 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:00:23.673424  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:00:23.682110  200074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1008 23:00:23.698014  200074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:00:23.714241  200074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
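	The multi-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. A small sketch that decodes each document and prints its kind, assuming gopkg.in/yaml.v3 is available and the file has been copied locally under the hypothetical name kubeadm.yaml:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more documents
				}
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}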
	I1008 23:00:23.730391  200074 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:00:23.734792  200074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:23.747684  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:23.928606  200074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:23.946415  200074 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429 for IP: 192.168.76.2
	I1008 23:00:23.946441  200074 certs.go:195] generating shared ca certs ...
	I1008 23:00:23.946461  200074 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:23.946635  200074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:00:23.946693  200074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:00:23.946706  200074 certs.go:257] generating profile certs ...
	I1008 23:00:23.946793  200074 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key
	I1008 23:00:23.946881  200074 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3
	I1008 23:00:23.946947  200074 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key
	I1008 23:00:23.947094  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:00:23.947129  200074 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:00:23.947142  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:00:23.947170  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:00:23.947193  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:00:23.947224  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:00:23.947272  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:23.947891  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:00:23.971323  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:00:23.996302  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:00:24.027533  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:00:24.067397  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 23:00:24.113587  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 23:00:24.171396  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:00:24.233317  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:00:24.281842  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:00:24.312837  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:00:24.337367  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:00:24.364278  200074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:00:24.380163  200074 ssh_runner.go:195] Run: openssl version
	I1008 23:00:24.402171  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:00:24.411218  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.420653  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.420720  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.477008  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:00:24.486489  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:00:24.495742  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.500273  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.500338  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.545507  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:00:24.554243  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:00:24.568916  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.573351  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.573418  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.618186  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
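	Each CA file is linked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 above). A sketch that reproduces the link name for a given PEM, assuming the openssl binary is available; the helper is illustrative, not minikube code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHashLink returns the "<hash>.0" file name that the log's
	// `openssl x509 -hash -noout` + `ln -fs` sequence would create.
	func subjectHashLink(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("link name:", link)
	}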
	I1008 23:00:24.629747  200074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:00:24.634953  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:00:24.681889  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:00:24.725355  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:00:24.834276  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:00:24.932960  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:00:25.074571  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
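	The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours. The equivalent check in Go with crypto/x509, assuming a PEM certificate path passed on the command line; this is a stand-alone verification sketch, not minikube's implementation:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Println("usage: checkend <cert.pem>") // e.g. front-proxy-client.crt
			os.Exit(2)
		}
		raw, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
		// expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid until", cert.NotAfter)
	}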
	I1008 23:00:25.193985  200074 kubeadm.go:400] StartCluster: {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:25.194067  200074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:00:25.194141  200074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:00:25.269452  200074 cri.go:89] found id: "55041cc30a387a17c3c9cf147c52e73bd7ccd0183b6e8e9db71a9640bc8f2175"
	I1008 23:00:25.269472  200074 cri.go:89] found id: "22eefec3ff76db05811d4a86718d52b7b055ea7d7d671f8dbebc79eb5b28c061"
	I1008 23:00:25.269477  200074 cri.go:89] found id: "2b4397a485127543aacc4c006f8eda3f76ef0a1494d94a217bad28ca9644dec3"
	I1008 23:00:25.269481  200074 cri.go:89] found id: "a4d4c06603233f6d3f0466d405ac5015842b9b9a3ddd88eaeb71a429911303a0"
	I1008 23:00:25.269498  200074 cri.go:89] found id: ""
	I1008 23:00:25.269546  200074 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 23:00:25.281173  200074 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:00:25Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:00:25.281268  200074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:00:25.322177  200074 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:00:25.322195  200074 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:00:25.322243  200074 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:00:25.362965  200074 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:00:25.363367  200074 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-825429" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:25.363461  200074 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-825429" cluster setting kubeconfig missing "embed-certs-825429" context setting]
	I1008 23:00:25.363775  200074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.365003  200074 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:00:25.380609  200074 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1008 23:00:25.380686  200074 kubeadm.go:601] duration metric: took 58.482086ms to restartPrimaryControlPlane
	I1008 23:00:25.380710  200074 kubeadm.go:402] duration metric: took 186.742153ms to StartCluster
	I1008 23:00:25.380754  200074 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.380828  200074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:25.381889  200074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.382365  200074 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:25.382428  200074 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:00:25.382473  200074 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:00:25.382797  200074 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825429"
	I1008 23:00:25.382821  200074 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-825429"
	W1008 23:00:25.382827  200074 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:00:25.382848  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.382884  200074 addons.go:69] Setting dashboard=true in profile "embed-certs-825429"
	I1008 23:00:25.382903  200074 addons.go:238] Setting addon dashboard=true in "embed-certs-825429"
	W1008 23:00:25.382909  200074 addons.go:247] addon dashboard should already be in state true
	I1008 23:00:25.382947  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.383306  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.383427  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.383753  200074 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825429"
	I1008 23:00:25.383775  200074 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825429"
	I1008 23:00:25.384049  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.389699  200074 out.go:179] * Verifying Kubernetes components...
	I1008 23:00:25.397744  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:25.427867  200074 addons.go:238] Setting addon default-storageclass=true in "embed-certs-825429"
	W1008 23:00:25.427894  200074 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:00:25.427918  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.428350  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.462323  200074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:00:25.462386  200074 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:00:25.465277  200074 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:00:25.465378  200074 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:25.465394  200074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:00:25.465457  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.468927  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:00:25.468950  200074 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:00:25.469011  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.506947  200074 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:25.506970  200074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:00:25.507029  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.520333  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:25.546607  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:25.556438  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:24.816796  200735 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:00:24.843704  200735 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 23:00:24.847692  200735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:24.861363  200735 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:00:24.861469  200735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:24.861518  200735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:24.910267  200735 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:24.910349  200735 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:00:24.910448  200735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:24.962779  200735 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:24.962801  200735 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:00:24.962808  200735 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1008 23:00:24.962923  200735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-779490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:00:24.962999  200735 ssh_runner.go:195] Run: crio config
	I1008 23:00:25.062075  200735 cni.go:84] Creating CNI manager for ""
	I1008 23:00:25.062100  200735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:25.062118  200735 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 23:00:25.062149  200735 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-779490 NodeName:default-k8s-diff-port-779490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:00:25.062285  200735 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-779490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:00:25.062361  200735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:00:25.074284  200735 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:00:25.074371  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:00:25.088117  200735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1008 23:00:25.106557  200735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:00:25.129827  200735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1008 23:00:25.149881  200735 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:00:25.154629  200735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:25.168582  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:25.460517  200735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:25.501961  200735 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490 for IP: 192.168.85.2
	I1008 23:00:25.501997  200735 certs.go:195] generating shared ca certs ...
	I1008 23:00:25.502015  200735 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.502157  200735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:00:25.502198  200735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:00:25.502204  200735 certs.go:257] generating profile certs ...
	I1008 23:00:25.502286  200735 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key
	I1008 23:00:25.502350  200735 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765
	I1008 23:00:25.502386  200735 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key
	I1008 23:00:25.502503  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:00:25.502530  200735 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:00:25.502538  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:00:25.502563  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:00:25.502588  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:00:25.502609  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:00:25.502650  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:25.503267  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:00:25.592800  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:00:25.646744  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:00:25.708575  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:00:25.781282  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 23:00:25.818906  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 23:00:25.877017  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:00:25.917052  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:00:25.947665  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:00:25.998644  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:00:26.025504  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:00:26.067106  200735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:00:26.088824  200735 ssh_runner.go:195] Run: openssl version
	I1008 23:00:26.100299  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:00:26.113073  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.120724  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.120843  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.190335  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:00:26.198935  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:00:26.210820  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.218162  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.218283  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.346366  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:00:26.373203  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:00:26.389547  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.402275  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.402419  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.505353  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:00:26.520251  200735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:00:26.536115  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:00:26.692708  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:00:26.825179  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:00:26.994307  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:00:27.130884  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:00:27.230322  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
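The six `openssl x509 -noout -checkend 86400` probes above verify that each existing control-plane certificate stays valid for at least another 24 hours before the cluster is reused. A minimal Go sketch of an equivalent check (illustrative only, not the minikube implementation; the paths are copied from the log and would have to be read on the node itself):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside the
// given window, mirroring `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Paths taken from the log above.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Printf("%s expires within 24h: %v (err: %v)\n", p, soon, err)
	}
}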
	I1008 23:00:27.336269  200735 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:27.336415  200735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:00:27.336525  200735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:00:27.395074  200735 cri.go:89] found id: "0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d"
	I1008 23:00:27.395140  200735 cri.go:89] found id: "b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a"
	I1008 23:00:27.395160  200735 cri.go:89] found id: "a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec"
	I1008 23:00:27.395184  200735 cri.go:89] found id: "d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9"
	I1008 23:00:27.395221  200735 cri.go:89] found id: ""
	I1008 23:00:27.395308  200735 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 23:00:27.426213  200735 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:00:27Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:00:27.426366  200735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:00:27.451284  200735 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:00:27.451347  200735 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:00:27.451438  200735 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:00:27.470047  200735 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:00:27.470958  200735 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-779490" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:27.471537  200735 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-779490" cluster setting kubeconfig missing "default-k8s-diff-port-779490" context setting]
	I1008 23:00:27.472341  200735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.474373  200735 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:00:27.502661  200735 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 23:00:27.502691  200735 kubeadm.go:601] duration metric: took 51.324103ms to restartPrimaryControlPlane
	I1008 23:00:27.502701  200735 kubeadm.go:402] duration metric: took 166.440913ms to StartCluster
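The restart path above renders a fresh /var/tmp/minikube/kubeadm.yaml.new, diffs it against the copy already on the node, and skips control-plane reconfiguration when they match ("The running cluster does not require reconfiguration"). A minimal Go sketch of that decision (illustrative only, not the minikube implementation; file paths follow the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfig compares the kubeadm config already on the node with the
// freshly rendered one; only a difference forces a control-plane restart.
func needsReconfig(currentPath, renderedPath string) (bool, error) {
	cur, err := os.ReadFile(currentPath)
	if err != nil {
		return true, nil // no existing config: reconfigure
	}
	next, err := os.ReadFile(renderedPath)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(cur, next), nil
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("reconfigure needed:", changed, "err:", err)
}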
	I1008 23:00:27.502716  200735 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.502780  200735 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:27.504255  200735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.504498  200735 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:00:27.504946  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:27.504993  200735 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:00:27.505173  200735 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.505205  200735 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.505273  200735 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:00:27.505309  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.505228  200735 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.505496  200735 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.505504  200735 addons.go:247] addon dashboard should already be in state true
	I1008 23:00:27.505523  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.506138  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.505236  200735 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.506586  200735 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-779490"
	I1008 23:00:27.506810  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.507164  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.508033  200735 out.go:179] * Verifying Kubernetes components...
	I1008 23:00:27.511128  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:27.571481  200735 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.571510  200735 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:00:27.571533  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.571937  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.577698  200735 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:00:27.577791  200735 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:00:27.580753  200735 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:00:25.875806  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:25.933368  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:00:25.933388  200074 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:00:25.967177  200074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:25.989730  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:25.995808  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:00:25.995886  200074 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:00:26.064075  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:00:26.064158  200074 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:00:26.159420  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:00:26.159495  200074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:00:26.259916  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:00:26.260013  200074 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:00:26.366694  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:00:26.366756  200074 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:00:26.415309  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:00:26.415386  200074 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:00:26.450896  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:00:26.450973  200074 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:00:26.486667  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:26.486690  200074 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:00:26.525078  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:27.580864  200735 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:27.580880  200735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:00:27.580952  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.583763  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:00:27.583795  200735 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:00:27.583868  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.614715  200735 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:27.614741  200735 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:00:27.614805  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.638478  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.657760  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.663405  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.965178  200735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:28.011190  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:28.042994  200735 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 23:00:28.104531  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:00:28.104603  200735 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:00:28.169664  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:00:28.169736  200735 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:00:28.180277  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:28.323258  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:00:28.323335  200735 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:00:28.459418  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:00:28.459558  200735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:00:28.517653  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:00:28.517677  200735 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:00:28.543581  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:00:28.543607  200735 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:00:28.568175  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:00:28.568200  200735 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:00:28.591552  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:00:28.591579  200735 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:00:28.624882  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:28.624907  200735 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:00:28.682187  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:36.554642  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.678801563s)
	I1008 23:00:36.554692  200074 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.587441579s)
	I1008 23:00:36.554723  200074 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825429" to be "Ready" ...
	I1008 23:00:36.555033  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.565226311s)
	I1008 23:00:36.555298  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.030193657s)
	I1008 23:00:36.558520  200074 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-825429 addons enable metrics-server
	
	I1008 23:00:36.588258  200074 node_ready.go:49] node "embed-certs-825429" is "Ready"
	I1008 23:00:36.588291  200074 node_ready.go:38] duration metric: took 33.550217ms for node "embed-certs-825429" to be "Ready" ...
	I1008 23:00:36.588304  200074 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:00:36.588362  200074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:00:36.604701  200074 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1008 23:00:35.825467  200735 node_ready.go:49] node "default-k8s-diff-port-779490" is "Ready"
	I1008 23:00:35.825499  200735 node_ready.go:38] duration metric: took 7.782419961s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 23:00:35.825513  200735 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:00:35.825575  200735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:00:38.105427  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.094147032s)
	I1008 23:00:38.105534  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.925184377s)
	I1008 23:00:38.105652  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.423327121s)
	I1008 23:00:38.105678  200735 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.280089174s)
	I1008 23:00:38.106178  200735 api_server.go:72] duration metric: took 10.601654805s to wait for apiserver process to appear ...
	I1008 23:00:38.106187  200735 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:00:38.106203  200735 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 23:00:38.109033  200735 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-779490 addons enable metrics-server
	
	I1008 23:00:38.130970  200735 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 23:00:38.131050  200735 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 23:00:38.161807  200735 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1008 23:00:36.607526  200074 addons.go:514] duration metric: took 11.225039641s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1008 23:00:36.616796  200074 api_server.go:72] duration metric: took 11.234244971s to wait for apiserver process to appear ...
	I1008 23:00:36.616820  200074 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:00:36.616839  200074 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1008 23:00:36.626167  200074 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1008 23:00:36.627242  200074 api_server.go:141] control plane version: v1.34.1
	I1008 23:00:36.627269  200074 api_server.go:131] duration metric: took 10.441367ms to wait for apiserver health ...
	I1008 23:00:36.627278  200074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:00:36.631675  200074 system_pods.go:59] 8 kube-system pods found
	I1008 23:00:36.631714  200074 system_pods.go:61] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:36.631722  200074 system_pods.go:61] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 23:00:36.631729  200074 system_pods.go:61] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 23:00:36.631735  200074 system_pods.go:61] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:36.631742  200074 system_pods.go:61] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:36.631750  200074 system_pods.go:61] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 23:00:36.631757  200074 system_pods.go:61] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:36.631768  200074 system_pods.go:61] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 23:00:36.631774  200074 system_pods.go:74] duration metric: took 4.489884ms to wait for pod list to return data ...
	I1008 23:00:36.631788  200074 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:00:36.634659  200074 default_sa.go:45] found service account: "default"
	I1008 23:00:36.634682  200074 default_sa.go:55] duration metric: took 2.887786ms for default service account to be created ...
	I1008 23:00:36.634693  200074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 23:00:36.638046  200074 system_pods.go:86] 8 kube-system pods found
	I1008 23:00:36.638083  200074 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:36.638092  200074 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 23:00:36.638097  200074 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 23:00:36.638104  200074 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:36.638116  200074 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:36.638121  200074 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 23:00:36.638127  200074 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:36.638134  200074 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 23:00:36.638141  200074 system_pods.go:126] duration metric: took 3.443001ms to wait for k8s-apps to be running ...
	I1008 23:00:36.638155  200074 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 23:00:36.638211  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:00:36.653778  200074 system_svc.go:56] duration metric: took 15.614806ms WaitForService to wait for kubelet
	I1008 23:00:36.653803  200074 kubeadm.go:586] duration metric: took 11.271256497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:36.653821  200074 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:00:36.657347  200074 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:00:36.657379  200074 node_conditions.go:123] node cpu capacity is 2
	I1008 23:00:36.657391  200074 node_conditions.go:105] duration metric: took 3.563849ms to run NodePressure ...
	I1008 23:00:36.657403  200074 start.go:241] waiting for startup goroutines ...
	I1008 23:00:36.657411  200074 start.go:246] waiting for cluster config update ...
	I1008 23:00:36.657423  200074 start.go:255] writing updated cluster config ...
	I1008 23:00:36.657783  200074 ssh_runner.go:195] Run: rm -f paused
	I1008 23:00:36.670223  200074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:00:36.682756  200074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 23:00:38.706369  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	I1008 23:00:38.164701  200735 addons.go:514] duration metric: took 10.659691491s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1008 23:00:38.607275  200735 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 23:00:38.622438  200735 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1008 23:00:38.624605  200735 api_server.go:141] control plane version: v1.34.1
	I1008 23:00:38.624637  200735 api_server.go:131] duration metric: took 518.442986ms to wait for apiserver health ...
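The healthz probe above first returns 500 because the rbac/bootstrap-roles post-start hook has not finished, then 200 on a later attempt; the start path simply re-polls the endpoint until it reports healthy. A minimal Go sketch of such a poll loop (illustrative only; the URL is copied from the log and TLS verification is skipped purely to keep the sketch self-contained):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.85.2:8444/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 60; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// e.g. 500 while poststarthook/rbac/bootstrap-roles is still pending
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}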
	I1008 23:00:38.624648  200735 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:00:38.630538  200735 system_pods.go:59] 8 kube-system pods found
	I1008 23:00:38.630582  200735 system_pods.go:61] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:38.630619  200735 system_pods.go:61] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 23:00:38.630633  200735 system_pods.go:61] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 23:00:38.630641  200735 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:38.630649  200735 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:38.630659  200735 system_pods.go:61] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 23:00:38.630668  200735 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:38.630688  200735 system_pods.go:61] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 23:00:38.630701  200735 system_pods.go:74] duration metric: took 6.047091ms to wait for pod list to return data ...
	I1008 23:00:38.630708  200735 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:00:38.636880  200735 default_sa.go:45] found service account: "default"
	I1008 23:00:38.636933  200735 default_sa.go:55] duration metric: took 6.183914ms for default service account to be created ...
	I1008 23:00:38.636950  200735 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 23:00:38.641529  200735 system_pods.go:86] 8 kube-system pods found
	I1008 23:00:38.641561  200735 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:38.641570  200735 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 23:00:38.641575  200735 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 23:00:38.641672  200735 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:38.641691  200735 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:38.641703  200735 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 23:00:38.641710  200735 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:38.641719  200735 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 23:00:38.641727  200735 system_pods.go:126] duration metric: took 4.769699ms to wait for k8s-apps to be running ...
	I1008 23:00:38.641752  200735 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 23:00:38.641843  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:00:38.657309  200735 system_svc.go:56] duration metric: took 15.563712ms WaitForService to wait for kubelet
	I1008 23:00:38.657341  200735 kubeadm.go:586] duration metric: took 11.152818203s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:38.657392  200735 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:00:38.660817  200735 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:00:38.660857  200735 node_conditions.go:123] node cpu capacity is 2
	I1008 23:00:38.660900  200735 node_conditions.go:105] duration metric: took 3.495048ms to run NodePressure ...
	I1008 23:00:38.660913  200735 start.go:241] waiting for startup goroutines ...
	I1008 23:00:38.660925  200735 start.go:246] waiting for cluster config update ...
	I1008 23:00:38.660937  200735 start.go:255] writing updated cluster config ...
	I1008 23:00:38.661285  200735 ssh_runner.go:195] Run: rm -f paused
	I1008 23:00:38.665450  200735 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:00:38.681495  200735 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 23:00:40.702946  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:41.192108  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:43.194681  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:45.689665  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:43.188107  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:45.195152  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:47.694917  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:50.202214  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:47.201882  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:49.202683  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:51.246618  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:52.690218  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:55.188303  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:53.690293  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:56.191657  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:57.694147  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:00.215108  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:58.688765  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:00.690867  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:02.690268  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:05.191132  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:03.190806  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:05.687338  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:07.691307  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	I1008 23:01:09.189198  200735 pod_ready.go:94] pod "coredns-66bc5c9577-9xx2v" is "Ready"
	I1008 23:01:09.189221  200735 pod_ready.go:86] duration metric: took 30.507687365s for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.193878  200735 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.198549  200735 pod_ready.go:94] pod "etcd-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.198580  200735 pod_ready.go:86] duration metric: took 4.672663ms for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.202726  200735 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.216341  200735 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.216428  200735 pod_ready.go:86] duration metric: took 13.672156ms for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.221298  200735 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.385313  200735 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.385345  200735 pod_ready.go:86] duration metric: took 164.020409ms for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.585312  200735 pod_ready.go:83] waiting for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.986012  200735 pod_ready.go:94] pod "kube-proxy-jrvxc" is "Ready"
	I1008 23:01:09.986041  200735 pod_ready.go:86] duration metric: took 400.698358ms for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.190147  200735 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.587493  200735 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:10.587525  200735 pod_ready.go:86] duration metric: took 397.349388ms for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.587538  200735 pod_ready.go:40] duration metric: took 31.922052481s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:01:10.662421  200735 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:01:10.665744  200735 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-779490" cluster and "default" namespace by default
	W1008 23:01:07.689010  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:09.689062  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:11.693197  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	I1008 23:01:12.189762  200074 pod_ready.go:94] pod "coredns-66bc5c9577-s7kcb" is "Ready"
	I1008 23:01:12.189792  200074 pod_ready.go:86] duration metric: took 35.506963864s for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.192723  200074 pod_ready.go:83] waiting for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.197407  200074 pod_ready.go:94] pod "etcd-embed-certs-825429" is "Ready"
	I1008 23:01:12.197430  200074 pod_ready.go:86] duration metric: took 4.678735ms for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.200027  200074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.204611  200074 pod_ready.go:94] pod "kube-apiserver-embed-certs-825429" is "Ready"
	I1008 23:01:12.204642  200074 pod_ready.go:86] duration metric: took 4.593655ms for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.206885  200074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.387130  200074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-825429" is "Ready"
	I1008 23:01:12.387178  200074 pod_ready.go:86] duration metric: took 180.247707ms for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.587705  200074 pod_ready.go:83] waiting for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.987048  200074 pod_ready.go:94] pod "kube-proxy-86wtc" is "Ready"
	I1008 23:01:12.987076  200074 pod_ready.go:86] duration metric: took 399.301634ms for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.187216  200074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.587259  200074 pod_ready.go:94] pod "kube-scheduler-embed-certs-825429" is "Ready"
	I1008 23:01:13.587290  200074 pod_ready.go:86] duration metric: took 400.047489ms for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.587304  200074 pod_ready.go:40] duration metric: took 36.916992323s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:01:13.655798  200074 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:01:13.659151  200074 out.go:179] * Done! kubectl is now configured to use "embed-certs-825429" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.855701066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.864371454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.865083965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.87996613Z" level=info msg="Created container 4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m/dashboard-metrics-scraper" id=5cbb8bc8-db54-4877-9529-4b83a6b610db name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.881022957Z" level=info msg="Starting container: 4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395" id=f540051c-60db-4562-a130-85343f3d47c2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:01:16 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:16.884238008Z" level=info msg="Started container" PID=1662 containerID=4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m/dashboard-metrics-scraper id=f540051c-60db-4562-a130-85343f3d47c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c400d705e85d71b6caae0b28251b9ea6896ead7d367498002c23881f9c62ce0f
	Oct 08 23:01:16 default-k8s-diff-port-779490 conmon[1660]: conmon 4851ac155c8ccb03c9a0 <ninfo>: container 1662 exited with status 1
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.228121364Z" level=info msg="Removing container: 8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac" id=41adadb3-7c2b-4aa6-9e84-72d1eaf4febe name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.23818218Z" level=info msg="Error loading conmon cgroup of container 8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac: cgroup deleted" id=41adadb3-7c2b-4aa6-9e84-72d1eaf4febe name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.242130369Z" level=info msg="Removed container 8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m/dashboard-metrics-scraper" id=41adadb3-7c2b-4aa6-9e84-72d1eaf4febe name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.730443041Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.73499512Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.735031888Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.735057808Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.738999565Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.739033428Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.739057034Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.742230305Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.742285682Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.742308706Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.745901223Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.745938606Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.745964575Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.749016826Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:17 default-k8s-diff-port-779490 crio[649]: time="2025-10-08T23:01:17.749047924Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	4851ac155c8cc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   2                   c400d705e85d7       dashboard-metrics-scraper-6ffb444bf9-kpl7m             kubernetes-dashboard
	f53fecc8b57f0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   ecb47aea06a8c       storage-provisioner                                    kube-system
	278e35cc7fbcc       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   42862bb0e247d       kubernetes-dashboard-855c9754f9-ppnz2                  kubernetes-dashboard
	e5d915946b8ea       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   c85acbd1d9d68       kube-proxy-jrvxc                                       kube-system
	1944ceb47b7c9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   ecb47aea06a8c       storage-provisioner                                    kube-system
	06c0a442bfb2b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   83ce0add140c8       busybox                                                default
	4a200e7e0c4c7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   ffc1bea46902e       coredns-66bc5c9577-9xx2v                               kube-system
	8a7be09e8d335       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   6cd4882a59918       kindnet-9vmvl                                          kube-system
	0c79858102e85       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   22c87fc132425       kube-apiserver-default-k8s-diff-port-779490            kube-system
	b17976f27670a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   61e125965e607       kube-scheduler-default-k8s-diff-port-779490            kube-system
	a9d1c9861bc94       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   47997e6ad5742       kube-controller-manager-default-k8s-diff-port-779490   kube-system
	d4862acbb3253       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   f4dd3fb2c37a5       etcd-default-k8s-diff-port-779490                      kube-system
	
	
	==> coredns [4a200e7e0c4c7fa3195d199b8f5e47922f16fe844523cd9c5eb8cb9c5b3a5f92] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60649 - 55716 "HINFO IN 4365178978083387005.6166260651569986081. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030115133s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-779490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-779490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=default-k8s-diff-port-779490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_59_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:58:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-779490
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 23:01:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 23:00:56 +0000   Wed, 08 Oct 2025 22:58:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 23:00:56 +0000   Wed, 08 Oct 2025 22:58:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 23:00:56 +0000   Wed, 08 Oct 2025 22:58:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 23:00:56 +0000   Wed, 08 Oct 2025 22:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-779490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0feade24457404cb032ff4236d61a10
	  System UUID:                c1cdfe18-651a-4f09-abda-0497a79b449c
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-9xx2v                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-default-k8s-diff-port-779490                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-9vmvl                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-779490             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-779490    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-jrvxc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-779490             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kpl7m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ppnz2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-779490 event: Registered Node default-k8s-diff-port-779490 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-779490 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-779490 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node default-k8s-diff-port-779490 event: Registered Node default-k8s-diff-port-779490 in Controller
	
	
	==> dmesg <==
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:58] overlayfs: idmapped layers are currently not supported
	[  +5.164783] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:00] overlayfs: idmapped layers are currently not supported
	[  +1.568442] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9] <==
	{"level":"warn","ts":"2025-10-08T23:00:32.530107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.559024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.597259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.641165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.688652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.737969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.766638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.797985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.824340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.854813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.865958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.900881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:32.945886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.008178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.032541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.074946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.115933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.149004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.193948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.239175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.295061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.325114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.353017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.391700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:33.569530Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39816","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:01:30 up  1:44,  0 user,  load average: 3.82, 2.56, 2.04
	Linux default-k8s-diff-port-779490 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8a7be09e8d3357ea5b26e1774372d50014be3d5c01add4f9434273ec80f5272e] <==
	I1008 23:00:37.602162       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 23:00:37.602590       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 23:00:37.602749       1 main.go:148] setting mtu 1500 for CNI 
	I1008 23:00:37.602793       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 23:00:37.602838       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T23:00:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 23:00:37.730536       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 23:00:37.804453       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 23:00:37.804494       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 23:00:37.804631       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 23:01:07.730649       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 23:01:07.805280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1008 23:01:07.805280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 23:01:07.808776       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1008 23:01:09.506641       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 23:01:09.506674       1 metrics.go:72] Registering metrics
	I1008 23:01:09.506746       1 controller.go:711] "Syncing nftables rules"
	I1008 23:01:17.730117       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 23:01:17.730160       1 main.go:301] handling current node
	I1008 23:01:27.730891       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1008 23:01:27.730930       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d] <==
	I1008 23:00:35.987679       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 23:00:36.020937       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 23:00:36.020961       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 23:00:36.021048       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 23:00:36.021108       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 23:00:36.021128       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 23:00:36.021333       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 23:00:36.046363       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1008 23:00:36.046449       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1008 23:00:36.046529       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1008 23:00:36.046568       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 23:00:36.057175       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 23:00:36.063521       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 23:00:36.148208       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1008 23:00:36.178098       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 23:00:36.837751       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 23:00:37.497271       1 controller.go:667] quota admission added evaluator for: namespaces
	I1008 23:00:37.639470       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 23:00:37.746977       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 23:00:37.783088       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 23:00:37.955732       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.40.244"}
	I1008 23:00:38.003242       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.25.176"}
	I1008 23:00:40.699954       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 23:00:40.918410       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 23:00:41.032676       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec] <==
	I1008 23:00:40.519794       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1008 23:00:40.519919       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1008 23:00:40.519980       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1008 23:00:40.520010       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1008 23:00:40.520040       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1008 23:00:40.523393       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 23:00:40.528115       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 23:00:40.531221       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 23:00:40.532742       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1008 23:00:40.532979       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 23:00:40.533073       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 23:00:40.533182       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-779490"
	I1008 23:00:40.533249       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1008 23:00:40.533762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1008 23:00:40.533777       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 23:00:40.533829       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1008 23:00:40.534323       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 23:00:40.535993       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 23:00:40.536064       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1008 23:00:40.536145       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1008 23:00:40.539041       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 23:00:40.551867       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 23:00:40.566701       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:00:40.566785       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 23:00:40.566818       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [e5d915946b8ea944e37566f7106abac224ef11871f731d856aaf37c2bac231dd] <==
	I1008 23:00:38.098615       1 server_linux.go:53] "Using iptables proxy"
	I1008 23:00:38.495486       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 23:00:38.603977       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 23:00:38.604104       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1008 23:00:38.604261       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 23:00:38.696390       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 23:00:38.696572       1 server_linux.go:132] "Using iptables Proxier"
	I1008 23:00:38.702280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 23:00:38.702998       1 server.go:527] "Version info" version="v1.34.1"
	I1008 23:00:38.703069       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:00:38.706920       1 config.go:106] "Starting endpoint slice config controller"
	I1008 23:00:38.707006       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 23:00:38.707321       1 config.go:200] "Starting service config controller"
	I1008 23:00:38.707328       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 23:00:38.707769       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 23:00:38.709713       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 23:00:38.710354       1 config.go:309] "Starting node config controller"
	I1008 23:00:38.710368       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 23:00:38.710375       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 23:00:38.807739       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 23:00:38.807876       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1008 23:00:38.810573       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a] <==
	I1008 23:00:32.699761       1 serving.go:386] Generated self-signed cert in-memory
	I1008 23:00:38.702092       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 23:00:38.702125       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:00:38.725990       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 23:00:38.726167       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 23:00:38.726316       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 23:00:38.726385       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 23:00:38.727346       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:00:38.727407       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:00:38.728865       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:00:38.733621       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:00:38.826977       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 23:00:38.834406       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:00:38.834529       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 23:00:41 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:41.071868     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l62hn\" (UniqueName: \"kubernetes.io/projected/4ce6d110-8ead-4b00-9c1c-115488a858ef-kube-api-access-l62hn\") pod \"kubernetes-dashboard-855c9754f9-ppnz2\" (UID: \"4ce6d110-8ead-4b00-9c1c-115488a858ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ppnz2"
	Oct 08 23:00:41 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:41.072706     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ce6d110-8ead-4b00-9c1c-115488a858ef-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ppnz2\" (UID: \"4ce6d110-8ead-4b00-9c1c-115488a858ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ppnz2"
	Oct 08 23:00:41 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:41.072847     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrbr\" (UniqueName: \"kubernetes.io/projected/7d132887-585f-4867-8b5e-8abd1e950fe7-kube-api-access-nbrbr\") pod \"dashboard-metrics-scraper-6ffb444bf9-kpl7m\" (UID: \"7d132887-585f-4867-8b5e-8abd1e950fe7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m"
	Oct 08 23:00:41 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:41.072940     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7d132887-585f-4867-8b5e-8abd1e950fe7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kpl7m\" (UID: \"7d132887-585f-4867-8b5e-8abd1e950fe7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m"
	Oct 08 23:00:42 default-k8s-diff-port-779490 kubelet[777]: W1008 23:00:42.505164     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/crio-42862bb0e247d76eba8244a61dc0a86c7b315762368e28d3edf595d0051efca9 WatchSource:0}: Error finding container 42862bb0e247d76eba8244a61dc0a86c7b315762368e28d3edf595d0051efca9: Status 404 returned error can't find the container with id 42862bb0e247d76eba8244a61dc0a86c7b315762368e28d3edf595d0051efca9
	Oct 08 23:00:42 default-k8s-diff-port-779490 kubelet[777]: W1008 23:00:42.536980     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/74faf5bf01ef70b57ba318840f8fffeec33009503828104a09289b1be78327ca/crio-c400d705e85d71b6caae0b28251b9ea6896ead7d367498002c23881f9c62ce0f WatchSource:0}: Error finding container c400d705e85d71b6caae0b28251b9ea6896ead7d367498002c23881f9c62ce0f: Status 404 returned error can't find the container with id c400d705e85d71b6caae0b28251b9ea6896ead7d367498002c23881f9c62ce0f
	Oct 08 23:00:56 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:56.150627     777 scope.go:117] "RemoveContainer" containerID="466a1ce652eb5d5063ab5732bd7c585249d47129a71aa0d4d4b3cfcfabf42486"
	Oct 08 23:00:56 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:56.169122     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ppnz2" podStartSLOduration=8.48827871 podStartE2EDuration="16.168368243s" podCreationTimestamp="2025-10-08 23:00:40 +0000 UTC" firstStartedPulling="2025-10-08 23:00:42.515958517 +0000 UTC m=+17.026586303" lastFinishedPulling="2025-10-08 23:00:50.19604805 +0000 UTC m=+24.706675836" observedRunningTime="2025-10-08 23:00:51.156060798 +0000 UTC m=+25.666688584" watchObservedRunningTime="2025-10-08 23:00:56.168368243 +0000 UTC m=+30.678996029"
	Oct 08 23:00:57 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:57.155042     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:00:57 default-k8s-diff-port-779490 kubelet[777]: E1008 23:00:57.155205     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:00:57 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:57.156389     777 scope.go:117] "RemoveContainer" containerID="466a1ce652eb5d5063ab5732bd7c585249d47129a71aa0d4d4b3cfcfabf42486"
	Oct 08 23:00:58 default-k8s-diff-port-779490 kubelet[777]: I1008 23:00:58.160424     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:00:58 default-k8s-diff-port-779490 kubelet[777]: E1008 23:00:58.160597     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:01:02 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:02.456543     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:01:02 default-k8s-diff-port-779490 kubelet[777]: E1008 23:01:02.456730     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:01:08 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:08.199270     777 scope.go:117] "RemoveContainer" containerID="1944ceb47b7c94b2edb63db70a4a7001ea79c19f4c62e47e167fe7d6263a8565"
	Oct 08 23:01:16 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:16.852305     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:01:17 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:17.225824     777 scope.go:117] "RemoveContainer" containerID="8c0e034c67161033c6e852231d6ce020a03f8807c5e9c2eea513706c76d0f8ac"
	Oct 08 23:01:17 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:17.226136     777 scope.go:117] "RemoveContainer" containerID="4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395"
	Oct 08 23:01:17 default-k8s-diff-port-779490 kubelet[777]: E1008 23:01:17.226307     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:01:22 default-k8s-diff-port-779490 kubelet[777]: I1008 23:01:22.456994     777 scope.go:117] "RemoveContainer" containerID="4851ac155c8ccb03c9a0af39cab91198acaf8f5c04262148f4ac1a0ba47f7395"
	Oct 08 23:01:22 default-k8s-diff-port-779490 kubelet[777]: E1008 23:01:22.457182     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kpl7m_kubernetes-dashboard(7d132887-585f-4867-8b5e-8abd1e950fe7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kpl7m" podUID="7d132887-585f-4867-8b5e-8abd1e950fe7"
	Oct 08 23:01:23 default-k8s-diff-port-779490 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 23:01:24 default-k8s-diff-port-779490 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 23:01:24 default-k8s-diff-port-779490 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [278e35cc7fbccaf5c63b64c560388a6a30f3774aced449276cff7421f19bcdfb] <==
	2025/10/08 23:00:50 Using namespace: kubernetes-dashboard
	2025/10/08 23:00:50 Using in-cluster config to connect to apiserver
	2025/10/08 23:00:50 Using secret token for csrf signing
	2025/10/08 23:00:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/08 23:00:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/08 23:00:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/08 23:00:50 Generating JWE encryption key
	2025/10/08 23:00:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/08 23:00:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/08 23:00:50 Initializing JWE encryption key from synchronized object
	2025/10/08 23:00:50 Creating in-cluster Sidecar client
	2025/10/08 23:00:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 23:00:50 Serving insecurely on HTTP port: 9090
	2025/10/08 23:01:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 23:00:50 Starting overwatch
	
	
	==> storage-provisioner [1944ceb47b7c94b2edb63db70a4a7001ea79c19f4c62e47e167fe7d6263a8565] <==
	I1008 23:00:37.582157       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 23:01:07.584034       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f53fecc8b57f03ffafccaf27e308d0f2475f20d0a79b800e28025b87e8e9f33d] <==
	I1008 23:01:08.258751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 23:01:08.274338       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 23:01:08.274396       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 23:01:08.277552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:11.732149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:15.992278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:19.595647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:22.648707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:25.671524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:25.679105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 23:01:25.679254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 23:01:25.679480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-779490_13b9a621-9c10-4be3-a2c2-77a9e596501a!
	I1008 23:01:25.682213       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1621d9ca-2fb2-43ad-b54a-b562c4b49118", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-779490_13b9a621-9c10-4be3-a2c2-77a9e596501a became leader
	W1008 23:01:25.683610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:25.690671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 23:01:25.781770       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-779490_13b9a621-9c10-4be3-a2c2-77a9e596501a!
	W1008 23:01:27.694116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:27.707408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:29.714997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:29.726085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490: exit status 2 (463.88239ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-779490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (8.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-825429 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-825429 --alsologtostderr -v=1: exit status 80 (2.17526434s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-825429 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 23:01:25.477867  204851 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:01:25.481763  204851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:01:25.481779  204851 out.go:374] Setting ErrFile to fd 2...
	I1008 23:01:25.481785  204851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:01:25.482082  204851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:01:25.482477  204851 out.go:368] Setting JSON to false
	I1008 23:01:25.482498  204851 mustload.go:65] Loading cluster: embed-certs-825429
	I1008 23:01:25.482932  204851 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:01:25.483391  204851 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:01:25.502300  204851 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:01:25.502733  204851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:01:25.562447  204851 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-08 23:01:25.553130114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:01:25.563165  204851 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-825429 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1008 23:01:25.571296  204851 out.go:179] * Pausing node embed-certs-825429 ... 
	I1008 23:01:25.574848  204851 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:01:25.575199  204851 ssh_runner.go:195] Run: systemctl --version
	I1008 23:01:25.575247  204851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:01:25.597564  204851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:01:25.703625  204851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:01:25.730833  204851 pause.go:52] kubelet running: true
	I1008 23:01:25.730905  204851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:01:26.065236  204851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:01:26.065337  204851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:01:26.155617  204851 cri.go:89] found id: "12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee"
	I1008 23:01:26.155636  204851 cri.go:89] found id: "1f4b81ea4020a6156308c39a4d711c3cae16849618c6cd4a1f14b6b14a1d2393"
	I1008 23:01:26.155641  204851 cri.go:89] found id: "c0fdc682f025c7d581ec1e76c0b8316090b7b1ba1c04a73b7d57e39600677e81"
	I1008 23:01:26.155645  204851 cri.go:89] found id: "2459fd3ba9053672ada5673a83f9c59ab57ebd0c4944a857bd3a952bcd5f7d2f"
	I1008 23:01:26.155648  204851 cri.go:89] found id: "af62cf6b338b21d6b9480139b1d489c4649cd4ade44f1ef4f7af892960632f3d"
	I1008 23:01:26.155652  204851 cri.go:89] found id: "55041cc30a387a17c3c9cf147c52e73bd7ccd0183b6e8e9db71a9640bc8f2175"
	I1008 23:01:26.155655  204851 cri.go:89] found id: "22eefec3ff76db05811d4a86718d52b7b055ea7d7d671f8dbebc79eb5b28c061"
	I1008 23:01:26.155658  204851 cri.go:89] found id: "2b4397a485127543aacc4c006f8eda3f76ef0a1494d94a217bad28ca9644dec3"
	I1008 23:01:26.155661  204851 cri.go:89] found id: "a4d4c06603233f6d3f0466d405ac5015842b9b9a3ddd88eaeb71a429911303a0"
	I1008 23:01:26.155667  204851 cri.go:89] found id: "e36f057891620b982eaccc9664bb49f05a3544bd09b31a8a03e27c78982d29d7"
	I1008 23:01:26.155670  204851 cri.go:89] found id: "a0ca50beda48eb593a29295444164c508e7747c30dcd8eacd75951f772dc6b39"
	I1008 23:01:26.155673  204851 cri.go:89] found id: ""
	I1008 23:01:26.155723  204851 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:01:26.167748  204851 retry.go:31] will retry after 247.879458ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:01:26Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:01:26.417855  204851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:01:26.434357  204851 pause.go:52] kubelet running: false
	I1008 23:01:26.434418  204851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:01:26.649305  204851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:01:26.649372  204851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:01:26.774926  204851 cri.go:89] found id: "12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee"
	I1008 23:01:26.774949  204851 cri.go:89] found id: "1f4b81ea4020a6156308c39a4d711c3cae16849618c6cd4a1f14b6b14a1d2393"
	I1008 23:01:26.774954  204851 cri.go:89] found id: "c0fdc682f025c7d581ec1e76c0b8316090b7b1ba1c04a73b7d57e39600677e81"
	I1008 23:01:26.774958  204851 cri.go:89] found id: "2459fd3ba9053672ada5673a83f9c59ab57ebd0c4944a857bd3a952bcd5f7d2f"
	I1008 23:01:26.774961  204851 cri.go:89] found id: "af62cf6b338b21d6b9480139b1d489c4649cd4ade44f1ef4f7af892960632f3d"
	I1008 23:01:26.775025  204851 cri.go:89] found id: "55041cc30a387a17c3c9cf147c52e73bd7ccd0183b6e8e9db71a9640bc8f2175"
	I1008 23:01:26.775032  204851 cri.go:89] found id: "22eefec3ff76db05811d4a86718d52b7b055ea7d7d671f8dbebc79eb5b28c061"
	I1008 23:01:26.775035  204851 cri.go:89] found id: "2b4397a485127543aacc4c006f8eda3f76ef0a1494d94a217bad28ca9644dec3"
	I1008 23:01:26.775038  204851 cri.go:89] found id: "a4d4c06603233f6d3f0466d405ac5015842b9b9a3ddd88eaeb71a429911303a0"
	I1008 23:01:26.775088  204851 cri.go:89] found id: "e36f057891620b982eaccc9664bb49f05a3544bd09b31a8a03e27c78982d29d7"
	I1008 23:01:26.775096  204851 cri.go:89] found id: "a0ca50beda48eb593a29295444164c508e7747c30dcd8eacd75951f772dc6b39"
	I1008 23:01:26.775116  204851 cri.go:89] found id: ""
	I1008 23:01:26.775232  204851 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:01:26.788445  204851 retry.go:31] will retry after 436.847558ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:01:26Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:01:27.225813  204851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:01:27.242979  204851 pause.go:52] kubelet running: false
	I1008 23:01:27.243038  204851 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:01:27.470944  204851 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:01:27.471025  204851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:01:27.567844  204851 cri.go:89] found id: "12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee"
	I1008 23:01:27.567868  204851 cri.go:89] found id: "1f4b81ea4020a6156308c39a4d711c3cae16849618c6cd4a1f14b6b14a1d2393"
	I1008 23:01:27.567872  204851 cri.go:89] found id: "c0fdc682f025c7d581ec1e76c0b8316090b7b1ba1c04a73b7d57e39600677e81"
	I1008 23:01:27.567876  204851 cri.go:89] found id: "2459fd3ba9053672ada5673a83f9c59ab57ebd0c4944a857bd3a952bcd5f7d2f"
	I1008 23:01:27.567882  204851 cri.go:89] found id: "af62cf6b338b21d6b9480139b1d489c4649cd4ade44f1ef4f7af892960632f3d"
	I1008 23:01:27.567893  204851 cri.go:89] found id: "55041cc30a387a17c3c9cf147c52e73bd7ccd0183b6e8e9db71a9640bc8f2175"
	I1008 23:01:27.567896  204851 cri.go:89] found id: "22eefec3ff76db05811d4a86718d52b7b055ea7d7d671f8dbebc79eb5b28c061"
	I1008 23:01:27.567899  204851 cri.go:89] found id: "2b4397a485127543aacc4c006f8eda3f76ef0a1494d94a217bad28ca9644dec3"
	I1008 23:01:27.567902  204851 cri.go:89] found id: "a4d4c06603233f6d3f0466d405ac5015842b9b9a3ddd88eaeb71a429911303a0"
	I1008 23:01:27.567908  204851 cri.go:89] found id: "e36f057891620b982eaccc9664bb49f05a3544bd09b31a8a03e27c78982d29d7"
	I1008 23:01:27.567911  204851 cri.go:89] found id: "a0ca50beda48eb593a29295444164c508e7747c30dcd8eacd75951f772dc6b39"
	I1008 23:01:27.567915  204851 cri.go:89] found id: ""
	I1008 23:01:27.567971  204851 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:01:27.586052  204851 out.go:203] 
	W1008 23:01:27.589947  204851 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:01:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:01:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 23:01:27.589969  204851 out.go:285] * 
	* 
	W1008 23:01:27.595461  204851 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 23:01:27.597518  204851 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-825429 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-825429
helpers_test.go:243: (dbg) docker inspect embed-certs-825429:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687",
	        "Created": "2025-10-08T22:58:27.270368583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200204,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T23:00:16.089405901Z",
	            "FinishedAt": "2025-10-08T23:00:15.309856407Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/hostname",
	        "HostsPath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/hosts",
	        "LogPath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687-json.log",
	        "Name": "/embed-certs-825429",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-825429:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-825429",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687",
	                "LowerDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-825429",
	                "Source": "/var/lib/docker/volumes/embed-certs-825429/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-825429",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-825429",
	                "name.minikube.sigs.k8s.io": "embed-certs-825429",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f52ad1db0913ad47db87a08a00349d9a8f510bb792e345b7c5b906a924083f7",
	            "SandboxKey": "/var/run/docker/netns/2f52ad1db091",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-825429": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:22:ba:e2:61:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c72f626705cdbf95a7acf2a18c80971f9e1c7948333cf514c2faeca371944562",
	                    "EndpointID": "871592bb21b06c608cd7bf8bb7de5ad4a057521e4aea7ca06dd4ab31cdf4981c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-825429",
	                        "3489ded6521e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825429 -n embed-certs-825429
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825429 -n embed-certs-825429: exit status 2 (456.895314ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-825429 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-825429 logs -n 25: (1.88850429s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	│ stop    │ -p no-preload-939665 --alsologtostderr -v=3                                                                                                                              │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:58 UTC │
	│ image   │ no-preload-939665 image list --format=json                                                                                                                               │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ pause   │ -p no-preload-939665 --alsologtostderr -v=1                                                                                                                              │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	│ ssh     │ force-systemd-flag-385382 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p force-systemd-flag-385382                                                                                                                                             │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                     │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                     │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-036919                                                                                                                                          │ disable-driver-mounts-036919 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:59 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │                     │
	│ stop    │ -p embed-certs-825429 --alsologtostderr -v=3                                                                                                                             │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ stop    │ -p default-k8s-diff-port-779490 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-825429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-779490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-779490 image list --format=json                                                                                                                    │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-779490 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ image   │ embed-certs-825429 image list --format=json                                                                                                                              │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p embed-certs-825429 --alsologtostderr -v=1                                                                                                                             │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 23:00:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 23:00:17.163938  200735 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:00:17.164058  200735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:00:17.164070  200735 out.go:374] Setting ErrFile to fd 2...
	I1008 23:00:17.164076  200735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:00:17.164320  200735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:00:17.164684  200735 out.go:368] Setting JSON to false
	I1008 23:00:17.165518  200735 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6168,"bootTime":1759958250,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 23:00:17.165584  200735 start.go:141] virtualization:  
	I1008 23:00:17.170349  200735 out.go:179] * [default-k8s-diff-port-779490] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 23:00:17.173550  200735 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 23:00:17.173606  200735 notify.go:220] Checking for updates...
	I1008 23:00:17.179549  200735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 23:00:17.182394  200735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:17.185318  200735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 23:00:17.188242  200735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 23:00:17.191227  200735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 23:00:17.194561  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:17.195186  200735 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 23:00:17.221784  200735 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 23:00:17.221965  200735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:00:17.290959  200735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-08 23:00:17.282099792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:00:17.291074  200735 docker.go:318] overlay module found
	I1008 23:00:17.294262  200735 out.go:179] * Using the docker driver based on existing profile
	I1008 23:00:17.297119  200735 start.go:305] selected driver: docker
	I1008 23:00:17.297140  200735 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:17.297251  200735 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 23:00:17.298023  200735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:00:17.356048  200735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-08 23:00:17.346390453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:00:17.356372  200735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:17.356415  200735 cni.go:84] Creating CNI manager for ""
	I1008 23:00:17.356471  200735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:17.356518  200735 start.go:349] cluster config:
	{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
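	The cluster config dumped above is the same structure that the "Saving config to .../config.json" lines persist to disk. A minimal sketch (not minikube's own code) of reading a few of those fields back from config.json; the ProfileConfig struct below is a hypothetical subset of the real schema, kept only to the fields visible in the log.

	// readprofile.go - illustrative read-back of a minikube profile config.json.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// ProfileConfig is a hypothetical subset of the config shown in the log.
	type ProfileConfig struct {
		Name             string `json:"Name"`
		Driver           string `json:"Driver"`
		KubernetesConfig struct {
			KubernetesVersion string `json:"KubernetesVersion"`
			ContainerRuntime  string `json:"ContainerRuntime"`
			ClusterName       string `json:"ClusterName"`
		} `json:"KubernetesConfig"`
	}

	func main() {
		// Path mirrors the profile path in the log; adjust for your own MINIKUBE_HOME.
		path := os.ExpandEnv("$HOME/.minikube/profiles/default-k8s-diff-port-779490/config.json")
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, "read config:", err)
			os.Exit(1)
		}
		var cfg ProfileConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			fmt.Fprintln(os.Stderr, "parse config:", err)
			os.Exit(1)
		}
		fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n",
			cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime,
			cfg.KubernetesConfig.KubernetesVersion)
	}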
	I1008 23:00:17.359826  200735 out.go:179] * Starting "default-k8s-diff-port-779490" primary control-plane node in "default-k8s-diff-port-779490" cluster
	I1008 23:00:17.362672  200735 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 23:00:17.365466  200735 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 23:00:17.368335  200735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:17.368364  200735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 23:00:17.368384  200735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 23:00:17.368391  200735 cache.go:58] Caching tarball of preloaded images
	I1008 23:00:17.368477  200735 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 23:00:17.368487  200735 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 23:00:17.368593  200735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 23:00:17.387741  200735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 23:00:17.387766  200735 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 23:00:17.387788  200735 cache.go:232] Successfully downloaded all kic artifacts
	I1008 23:00:17.387813  200735 start.go:360] acquireMachinesLock for default-k8s-diff-port-779490: {Name:mkf9138008d7ef2884518c448a03b33b088d9068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 23:00:17.387870  200735 start.go:364] duration metric: took 34.314µs to acquireMachinesLock for "default-k8s-diff-port-779490"
	I1008 23:00:17.387894  200735 start.go:96] Skipping create...Using existing machine configuration
	I1008 23:00:17.387906  200735 fix.go:54] fixHost starting: 
	I1008 23:00:17.388165  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:17.405667  200735 fix.go:112] recreateIfNeeded on default-k8s-diff-port-779490: state=Stopped err=<nil>
	W1008 23:00:17.405698  200735 fix.go:138] unexpected machine state, will restart: <nil>
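	The cli_runner call above probes the container state with "docker container inspect --format={{.State.Status}}" and, on state=Stopped, falls through to restarting the existing container. A small sketch of the same probe via os/exec (assumes the docker CLI is on PATH; illustrative, not minikube's cli_runner):

	// containerstate.go - re-run the inspect probe shown in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("default-k8s-diff-port-779490")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// "exited" here corresponds to the state=Stopped decision in the log,
		// which triggers "Restarting existing docker container ...".
		fmt.Println("container state:", state)
	}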
	I1008 23:00:16.057868  200074 out.go:252] * Restarting existing docker container for "embed-certs-825429" ...
	I1008 23:00:16.057965  200074 cli_runner.go:164] Run: docker start embed-certs-825429
	I1008 23:00:16.315950  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:16.335815  200074 kic.go:430] container "embed-certs-825429" state is running.
	I1008 23:00:16.336208  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:16.356036  200074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/config.json ...
	I1008 23:00:16.356262  200074 machine.go:93] provisionDockerMachine start ...
	I1008 23:00:16.356315  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:16.378830  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:16.379148  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:16.379157  200074 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:00:16.380409  200074 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59024->127.0.0.1:33081: read: connection reset by peer
	I1008 23:00:19.529381  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 23:00:19.529407  200074 ubuntu.go:182] provisioning hostname "embed-certs-825429"
	I1008 23:00:19.529470  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:19.548688  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:19.549089  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:19.549126  200074 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825429 && echo "embed-certs-825429" | sudo tee /etc/hostname
	I1008 23:00:19.704942  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 23:00:19.705029  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:19.723786  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:19.724093  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:19.724110  200074 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825429' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825429/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825429' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:00:19.870310  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
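	The shell snippet run over SSH above makes the new hostname resolve locally: if no /etc/hosts line ends with the hostname, it either rewrites the existing 127.0.1.1 line or appends one. An illustrative Go version of the same rule, operating on the hosts-file contents as a string (a sketch, not minikube's provisioning code):

	// hostsfix.go - mirror of the /etc/hosts hostname rule in the snippet above.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	func ensureHostname(hosts, name string) string {
		// Hostname already mapped on some line? Leave the file alone.
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts
		}
		// Rewrite an existing 127.0.1.1 line, mirroring the sed branch.
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loop.MatchString(hosts) {
			return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		// Otherwise append, mirroring the "tee -a" branch.
		if !strings.HasSuffix(hosts, "\n") {
			hosts += "\n"
		}
		return hosts + "127.0.1.1 " + name + "\n"
	}

	func main() {
		sample := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Print(ensureHostname(sample, "embed-certs-825429"))
	}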
	I1008 23:00:19.870379  200074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:00:19.870406  200074 ubuntu.go:190] setting up certificates
	I1008 23:00:19.870417  200074 provision.go:84] configureAuth start
	I1008 23:00:19.870501  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:19.888221  200074 provision.go:143] copyHostCerts
	I1008 23:00:19.888292  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:00:19.888316  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:00:19.888394  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:00:19.888499  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:00:19.888508  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:00:19.888537  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:00:19.888603  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:00:19.888615  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:00:19.888643  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:00:19.888697  200074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825429 san=[127.0.0.1 192.168.76.2 embed-certs-825429 localhost minikube]
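	The provisioning step above generates server.pem with an explicit SAN list (127.0.0.1, 192.168.76.2, the machine name, localhost, minikube). A sketch of checking whether an existing PEM certificate still covers those names and IPs with crypto/x509; illustrative only, the file path is a placeholder and this is not minikube's own validation:

	// sancheck.go - does server.pem cover the SANs listed in the log above?
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("server.pem") // placeholder path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// SAN list taken from the provision log line above; VerifyHostname
		// handles both DNS names and IP addresses.
		for _, san := range []string{"127.0.0.1", "192.168.76.2", "embed-certs-825429", "localhost", "minikube"} {
			if err := cert.VerifyHostname(san); err != nil {
				fmt.Printf("missing SAN %q: %v\n", san, err)
			} else {
				fmt.Printf("covers %q\n", san)
			}
		}
	}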
	I1008 23:00:17.408820  200735 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-779490" ...
	I1008 23:00:17.408898  200735 cli_runner.go:164] Run: docker start default-k8s-diff-port-779490
	I1008 23:00:17.666806  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:17.691387  200735 kic.go:430] container "default-k8s-diff-port-779490" state is running.
	I1008 23:00:17.691764  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:17.715368  200735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 23:00:17.715595  200735 machine.go:93] provisionDockerMachine start ...
	I1008 23:00:17.715865  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:17.740298  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:17.740619  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:17.740636  200735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:00:17.741357  200735 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 23:00:20.909388  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 23:00:20.909415  200735 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-779490"
	I1008 23:00:20.909477  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:20.926770  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:20.927074  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:20.927096  200735 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-779490 && echo "default-k8s-diff-port-779490" | sudo tee /etc/hostname
	I1008 23:00:21.093286  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 23:00:21.093383  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:21.122816  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:21.123125  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:21.123144  200735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-779490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-779490/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-779490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:00:21.274338  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:00:21.274367  200735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:00:21.274399  200735 ubuntu.go:190] setting up certificates
	I1008 23:00:21.274412  200735 provision.go:84] configureAuth start
	I1008 23:00:21.274479  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:21.301901  200735 provision.go:143] copyHostCerts
	I1008 23:00:21.301972  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:00:21.301995  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:00:21.302061  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:00:21.302175  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:00:21.302187  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:00:21.302212  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:00:21.302280  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:00:21.302297  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:00:21.302320  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:00:21.302377  200735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-779490 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-779490 localhost minikube]
	I1008 23:00:22.045829  200735 provision.go:177] copyRemoteCerts
	I1008 23:00:22.045958  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:00:22.046043  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.065464  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:20.814951  200074 provision.go:177] copyRemoteCerts
	I1008 23:00:20.815017  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:00:20.815059  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:20.834587  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:20.947002  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:00:20.966672  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 23:00:20.987841  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 23:00:21.017825  200074 provision.go:87] duration metric: took 1.147384041s to configureAuth
	I1008 23:00:21.017855  200074 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:00:21.018073  200074 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:21.018178  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.038971  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:21.039282  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:21.039304  200074 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:00:21.410917  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:00:21.410937  200074 machine.go:96] duration metric: took 5.054666132s to provisionDockerMachine
	I1008 23:00:21.410948  200074 start.go:293] postStartSetup for "embed-certs-825429" (driver="docker")
	I1008 23:00:21.410958  200074 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:00:21.411025  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:00:21.411063  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.439350  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.543094  200074 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:00:21.547406  200074 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:00:21.547435  200074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:00:21.547450  200074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:00:21.547507  200074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:00:21.547597  200074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:00:21.547700  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:00:21.556609  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:21.585243  200074 start.go:296] duration metric: took 174.278532ms for postStartSetup
	I1008 23:00:21.585334  200074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:00:21.585378  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.621333  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.735318  200074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:00:21.743106  200074 fix.go:56] duration metric: took 5.706738194s for fixHost
	I1008 23:00:21.743134  200074 start.go:83] releasing machines lock for "embed-certs-825429", held for 5.70679646s
	I1008 23:00:21.743208  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:21.767422  200074 ssh_runner.go:195] Run: cat /version.json
	I1008 23:00:21.767474  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.767704  200074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:00:21.767778  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.807518  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.808257  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:22.023792  200074 ssh_runner.go:195] Run: systemctl --version
	I1008 23:00:22.032065  200074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:00:22.086835  200074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:00:22.095791  200074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:00:22.095870  200074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:00:22.106263  200074 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
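	The find/-exec mv command above moves any bridge or podman CNI configs out of the way by renaming them to *.mk_disabled (here none were found). A sketch of the same rename in Go using filepath.Glob; illustrative, and it only scans the top level of /etc/cni/net.d like the maxdepth 1 find above:

	// cnidisable.go - rename bridge/podman CNI configs to *.mk_disabled.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func disableBridgeCNI(dir string) error {
		for _, pattern := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pattern))
			if err != nil {
				return err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				fmt.Printf("%s -> %s.mk_disabled\n", m, m)
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return err
				}
			}
		}
		return nil
	}

	func main() {
		if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}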
	I1008 23:00:22.106289  200074 start.go:495] detecting cgroup driver to use...
	I1008 23:00:22.106323  200074 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:00:22.106377  200074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:00:22.126344  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:00:22.142497  200074 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:00:22.142563  200074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:00:22.158960  200074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:00:22.174798  200074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:00:22.323493  200074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:00:22.466670  200074 docker.go:234] disabling docker service ...
	I1008 23:00:22.466740  200074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:00:22.483900  200074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:00:22.498887  200074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:00:22.646149  200074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:00:22.804808  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:00:22.821564  200074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:00:22.839222  200074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:00:22.839285  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.851109  200074 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:00:22.851182  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.863916  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.878286  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.887691  200074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:00:22.897074  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.909548  200074 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.919602  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.930018  200074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:00:22.938657  200074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:00:22.946980  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:23.134756  200074 ssh_runner.go:195] Run: sudo systemctl restart crio
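	The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before daemon-reload and a crio restart. A minimal sketch of the two headline rewrites applied to the config text in Go; illustrative only, not minikube's crio.go:

	// crioconf.go - rewrite pause_image and cgroup_manager as the sed runs above do.
	package main

	import (
		"fmt"
		"regexp"
	)

	func rewriteCrioConf(conf string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		return conf
	}

	func main() {
		sample := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(sample))
	}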
	I1008 23:00:23.291036  200074 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:00:23.291115  200074 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:00:23.295899  200074 start.go:563] Will wait 60s for crictl version
	I1008 23:00:23.295972  200074 ssh_runner.go:195] Run: which crictl
	I1008 23:00:23.300513  200074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:00:23.339721  200074 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
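	The "Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version" lines describe bounded retries after the crio restart. A short poll-loop sketch in that spirit, using os.Stat with a deadline (illustrative, not minikube's wait logic):

	// waitsock.go - bounded wait for the CRI socket to appear.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}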
	I1008 23:00:23.339809  200074 ssh_runner.go:195] Run: crio --version
	I1008 23:00:23.382887  200074 ssh_runner.go:195] Run: crio --version
	I1008 23:00:23.427225  200074 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 23:00:22.179705  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:00:22.201073  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 23:00:22.231111  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 23:00:22.265814  200735 provision.go:87] duration metric: took 991.378792ms to configureAuth
	I1008 23:00:22.265882  200735 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:00:22.266132  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:22.266293  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.285804  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:22.286122  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:22.286137  200735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:00:22.656376  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:00:22.656462  200735 machine.go:96] duration metric: took 4.940857891s to provisionDockerMachine
	I1008 23:00:22.656490  200735 start.go:293] postStartSetup for "default-k8s-diff-port-779490" (driver="docker")
	I1008 23:00:22.656532  200735 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:00:22.656635  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:00:22.656703  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.681602  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:22.795033  200735 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:00:22.799606  200735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:00:22.799632  200735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:00:22.799644  200735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:00:22.799704  200735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:00:22.799788  200735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:00:22.799891  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:00:22.809604  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:22.832880  200735 start.go:296] duration metric: took 176.344915ms for postStartSetup
	I1008 23:00:22.833082  200735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:00:22.833170  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.857779  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:22.964061  200735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:00:22.969468  200735 fix.go:56] duration metric: took 5.581560799s for fixHost
	I1008 23:00:22.969491  200735 start.go:83] releasing machines lock for "default-k8s-diff-port-779490", held for 5.581607766s
	I1008 23:00:22.969557  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:22.988681  200735 ssh_runner.go:195] Run: cat /version.json
	I1008 23:00:22.988742  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.988958  200735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:00:22.989020  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:23.026248  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:23.043081  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:23.248291  200735 ssh_runner.go:195] Run: systemctl --version
	I1008 23:00:23.255759  200735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:00:23.326213  200735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:00:23.335019  200735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:00:23.335098  200735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:00:23.344495  200735 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 23:00:23.344539  200735 start.go:495] detecting cgroup driver to use...
	I1008 23:00:23.344575  200735 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:00:23.344639  200735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:00:23.367326  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:00:23.380944  200735 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:00:23.381008  200735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:00:23.398756  200735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:00:23.412634  200735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:00:23.559101  200735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:00:23.743425  200735 docker.go:234] disabling docker service ...
	I1008 23:00:23.743510  200735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:00:23.767092  200735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:00:23.784102  200735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:00:23.992289  200735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:00:24.197499  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:00:24.213564  200735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:00:24.241135  200735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:00:24.241200  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.259960  200735 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:00:24.260094  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.270690  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.284851  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.296200  200735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:00:24.304654  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.313931  200735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.322480  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.333103  200735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:00:24.342318  200735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:00:24.350381  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:24.494463  200735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:00:24.666167  200735 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:00:24.666337  200735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:00:24.670699  200735 start.go:563] Will wait 60s for crictl version
	I1008 23:00:24.670769  200735 ssh_runner.go:195] Run: which crictl
	I1008 23:00:24.674726  200735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:00:24.721851  200735 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:00:24.721939  200735 ssh_runner.go:195] Run: crio --version
	I1008 23:00:24.775722  200735 ssh_runner.go:195] Run: crio --version
	I1008 23:00:24.813408  200735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 23:00:23.430030  200074 cli_runner.go:164] Run: docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:00:23.456528  200074 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 23:00:23.460989  200074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:23.482225  200074 kubeadm.go:883] updating cluster {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:00:23.482358  200074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:23.482421  200074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:23.531360  200074 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:23.531387  200074 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:00:23.531462  200074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:23.569867  200074 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:23.569936  200074 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:00:23.569960  200074 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1008 23:00:23.570103  200074 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-825429 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:00:23.570200  200074 ssh_runner.go:195] Run: crio config
	I1008 23:00:23.663769  200074 cni.go:84] Creating CNI manager for ""
	I1008 23:00:23.663807  200074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:23.663827  200074 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 23:00:23.663851  200074 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825429 NodeName:embed-certs-825429 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:00:23.664032  200074 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825429"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:00:23.664188  200074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:00:23.673332  200074 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:00:23.673424  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:00:23.682110  200074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1008 23:00:23.698014  200074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:00:23.714241  200074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1008 23:00:23.730391  200074 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:00:23.734792  200074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:23.747684  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:23.928606  200074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:23.946415  200074 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429 for IP: 192.168.76.2
	I1008 23:00:23.946441  200074 certs.go:195] generating shared ca certs ...
	I1008 23:00:23.946461  200074 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:23.946635  200074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:00:23.946693  200074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:00:23.946706  200074 certs.go:257] generating profile certs ...
	I1008 23:00:23.946793  200074 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key
	I1008 23:00:23.946881  200074 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3
	I1008 23:00:23.946947  200074 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key
	I1008 23:00:23.947094  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:00:23.947129  200074 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:00:23.947142  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:00:23.947170  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:00:23.947193  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:00:23.947224  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:00:23.947272  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:23.947891  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:00:23.971323  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:00:23.996302  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:00:24.027533  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:00:24.067397  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 23:00:24.113587  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 23:00:24.171396  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:00:24.233317  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:00:24.281842  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:00:24.312837  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:00:24.337367  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:00:24.364278  200074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:00:24.380163  200074 ssh_runner.go:195] Run: openssl version
	I1008 23:00:24.402171  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:00:24.411218  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.420653  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.420720  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.477008  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:00:24.486489  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:00:24.495742  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.500273  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.500338  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.545507  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:00:24.554243  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:00:24.568916  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.573351  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.573418  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.618186  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:00:24.629747  200074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:00:24.634953  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:00:24.681889  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:00:24.725355  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:00:24.834276  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:00:24.932960  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:00:25.074571  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
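Note on the block above: this is the standard OpenSSL trust-store dance. Each extra PEM copied under /usr/share/ca-certificates is hashed with openssl x509 -hash -noout, and a symlink named <hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 in this run) is created under /etc/ssl/certs so OpenSSL-linked clients can find the CA; the following -checkend 86400 calls then confirm that none of the control-plane certificates expire within the next 24 hours. A minimal Go sketch of both steps, assuming openssl is on PATH and the paths are writable (the helper names are illustrative, not minikube's own code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // subjectHash returns the OpenSSL subject hash of a PEM certificate,
    // e.g. "b5213941" for the minikube CA seen in the log above.
    func subjectHash(pem string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    // linkIntoTrustDir creates <trustDir>/<hash>.0 pointing at the cert,
    // mirroring the "test -L ... || ln -fs ..." shell step in the log.
    func linkIntoTrustDir(pem, trustDir string) error {
    	h, err := subjectHash(pem)
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(trustDir, h+".0")
    	if _, err := os.Lstat(link); err == nil {
    		return nil // symlink already present
    	}
    	return os.Symlink(pem, link)
    }

    // expiresWithin reports whether the cert expires in the next n seconds,
    // using the same "-checkend" flag seen in the log (86400s = 24h).
    func expiresWithin(pem string, seconds int) bool {
    	err := exec.Command("openssl", "x509", "-noout", "-in", pem,
    		"-checkend", fmt.Sprint(seconds)).Run()
    	return err != nil // non-zero exit means the cert will lapse in time
    }

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	if err := linkIntoTrustDir(pem, "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	fmt.Println("expires within 24h:", expiresWithin(pem, 86400))
    }

A failing -checkend probe is what would push the flow into regenerating a certificate instead of the "skipping valid signed profile cert regeneration" path logged earlier.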
	I1008 23:00:25.193985  200074 kubeadm.go:400] StartCluster: {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:25.194067  200074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:00:25.194141  200074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:00:25.269452  200074 cri.go:89] found id: "55041cc30a387a17c3c9cf147c52e73bd7ccd0183b6e8e9db71a9640bc8f2175"
	I1008 23:00:25.269472  200074 cri.go:89] found id: "22eefec3ff76db05811d4a86718d52b7b055ea7d7d671f8dbebc79eb5b28c061"
	I1008 23:00:25.269477  200074 cri.go:89] found id: "2b4397a485127543aacc4c006f8eda3f76ef0a1494d94a217bad28ca9644dec3"
	I1008 23:00:25.269481  200074 cri.go:89] found id: "a4d4c06603233f6d3f0466d405ac5015842b9b9a3ddd88eaeb71a429911303a0"
	I1008 23:00:25.269498  200074 cri.go:89] found id: ""
	I1008 23:00:25.269546  200074 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 23:00:25.281173  200074 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:00:25Z" level=error msg="open /run/runc: no such file or directory"
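The container IDs listed just above come from crictl ps -a --quiet with a pod-namespace label filter; the runc list failure that follows is expected on this CRI-O node, since CRI-O keeps its runc state under its own root rather than /run/runc, so the tool logs the warning and treats the set of paused containers as empty. A small sketch of the listing step, assuming crictl is available locally (in the real flow it runs over SSH inside the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeSystemContainerIDs lists every CRI container (running or not) whose
    // pod lives in kube-system, the exact crictl invocation shown in the log.
    func kubeSystemContainerIDs() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := kubeSystemContainerIDs()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	for _, id := range ids {
    		fmt.Println("found id:", id)
    	}
    }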
	I1008 23:00:25.281268  200074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:00:25.322177  200074 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:00:25.322195  200074 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:00:25.322243  200074 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:00:25.362965  200074 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:00:25.363367  200074 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-825429" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:25.363461  200074 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-825429" cluster setting kubeconfig missing "embed-certs-825429" context setting]
	I1008 23:00:25.363775  200074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.365003  200074 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:00:25.380609  200074 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1008 23:00:25.380686  200074 kubeadm.go:601] duration metric: took 58.482086ms to restartPrimaryControlPlane
	I1008 23:00:25.380710  200074 kubeadm.go:402] duration metric: took 186.742153ms to StartCluster
	I1008 23:00:25.380754  200074 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.380828  200074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:25.381889  200074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.382365  200074 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:25.382428  200074 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:00:25.382473  200074 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:00:25.382797  200074 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825429"
	I1008 23:00:25.382821  200074 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-825429"
	W1008 23:00:25.382827  200074 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:00:25.382848  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.382884  200074 addons.go:69] Setting dashboard=true in profile "embed-certs-825429"
	I1008 23:00:25.382903  200074 addons.go:238] Setting addon dashboard=true in "embed-certs-825429"
	W1008 23:00:25.382909  200074 addons.go:247] addon dashboard should already be in state true
	I1008 23:00:25.382947  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.383306  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.383427  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.383753  200074 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825429"
	I1008 23:00:25.383775  200074 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825429"
	I1008 23:00:25.384049  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.389699  200074 out.go:179] * Verifying Kubernetes components...
	I1008 23:00:25.397744  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:25.427867  200074 addons.go:238] Setting addon default-storageclass=true in "embed-certs-825429"
	W1008 23:00:25.427894  200074 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:00:25.427918  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.428350  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.462323  200074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:00:25.462386  200074 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:00:25.465277  200074 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:00:25.465378  200074 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:25.465394  200074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:00:25.465457  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.468927  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:00:25.468950  200074 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:00:25.469011  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.506947  200074 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:25.506970  200074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:00:25.507029  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.520333  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:25.546607  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:25.556438  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
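Each "new ssh client" line above is preceded by a docker container inspect call whose Go template digs the published host port for the container's 22/tcp out of .NetworkSettings.Ports; that is where the 127.0.0.1:33081 target comes from. A hedged Go sketch of the same lookup (the container name is taken from this run; docker must be on PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort asks Docker for the host port mapped to the container's
    // 22/tcp, using the same Go template the log shows (minus the quoting).
    func sshHostPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("embed-certs-825429")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	// The test run above resolved this to 127.0.0.1:33081.
    	fmt.Printf("ssh target: 127.0.0.1:%s\n", port)
    }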
	I1008 23:00:24.816796  200735 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:00:24.843704  200735 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 23:00:24.847692  200735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
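The bash pipeline above keeps /etc/hosts idempotent: it filters out any existing host.minikube.internal line, appends a fresh mapping, and copies the result back into place. A minimal Go equivalent of that rewrite, shown against a throwaway file rather than the node's real /etc/hosts (paths and names here are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites a hosts-style file so exactly one line maps
    // name to ip, mirroring the "grep -v ...; echo ...; cp" pipeline in the
    // log: any stale mapping for the name is dropped before appending.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any existing mapping for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// Throwaway copy standing in for the node's /etc/hosts.
    	tmp := "/tmp/hosts.copy"
    	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0o644)
    	if err := ensureHostsEntry(tmp, "192.168.85.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    		return
    	}
    	out, _ := os.ReadFile(tmp)
    	fmt.Print(string(out))
    }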
	I1008 23:00:24.861363  200735 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:00:24.861469  200735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:24.861518  200735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:24.910267  200735 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:24.910349  200735 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:00:24.910448  200735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:24.962779  200735 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:24.962801  200735 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:00:24.962808  200735 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1008 23:00:24.962923  200735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-779490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:00:24.962999  200735 ssh_runner.go:195] Run: crio config
	I1008 23:00:25.062075  200735 cni.go:84] Creating CNI manager for ""
	I1008 23:00:25.062100  200735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:25.062118  200735 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 23:00:25.062149  200735 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-779490 NodeName:default-k8s-diff-port-779490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:00:25.062285  200735 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-779490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:00:25.062361  200735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:00:25.074284  200735 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:00:25.074371  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:00:25.088117  200735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1008 23:00:25.106557  200735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:00:25.129827  200735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
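The 2225-byte file written above is the freshly rendered kubeadm config; a later step in this same restart path runs sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new, and an identical pair is what produces the "running cluster does not require reconfiguration" message further down. A sketch of that comparison, interpreting diff's exit status the way the log implies (0 identical, 1 different):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // needsReconfig compares the kubeadm config already on disk with the newly
    // rendered one, as the "sudo diff -u ... kubeadm.yaml.new" step does:
    // exit 0 means identical, exit 1 means the files differ.
    func needsReconfig(current, rendered string) (bool, error) {
    	err := exec.Command("diff", "-u", current, rendered).Run()
    	if err == nil {
    		return false, nil // identical: "does not require reconfiguration"
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, nil // files differ: control plane must be reconfigured
    	}
    	return false, err // diff itself failed (missing file, etc.)
    }

    func main() {
    	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println("diff failed:", err)
    		return
    	}
    	fmt.Println("needs reconfiguration:", changed)
    }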
	I1008 23:00:25.149881  200735 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:00:25.154629  200735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:25.168582  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:25.460517  200735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:25.501961  200735 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490 for IP: 192.168.85.2
	I1008 23:00:25.501997  200735 certs.go:195] generating shared ca certs ...
	I1008 23:00:25.502015  200735 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.502157  200735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:00:25.502198  200735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:00:25.502204  200735 certs.go:257] generating profile certs ...
	I1008 23:00:25.502286  200735 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key
	I1008 23:00:25.502350  200735 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765
	I1008 23:00:25.502386  200735 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key
	I1008 23:00:25.502503  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:00:25.502530  200735 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:00:25.502538  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:00:25.502563  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:00:25.502588  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:00:25.502609  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:00:25.502650  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:25.503267  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:00:25.592800  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:00:25.646744  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:00:25.708575  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:00:25.781282  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 23:00:25.818906  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 23:00:25.877017  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:00:25.917052  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:00:25.947665  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:00:25.998644  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:00:26.025504  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:00:26.067106  200735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:00:26.088824  200735 ssh_runner.go:195] Run: openssl version
	I1008 23:00:26.100299  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:00:26.113073  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.120724  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.120843  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.190335  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:00:26.198935  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:00:26.210820  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.218162  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.218283  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.346366  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:00:26.373203  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:00:26.389547  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.402275  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.402419  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.505353  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:00:26.520251  200735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:00:26.536115  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:00:26.692708  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:00:26.825179  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:00:26.994307  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:00:27.130884  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:00:27.230322  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 23:00:27.336269  200735 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:27.336415  200735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:00:27.336525  200735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:00:27.395074  200735 cri.go:89] found id: "0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d"
	I1008 23:00:27.395140  200735 cri.go:89] found id: "b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a"
	I1008 23:00:27.395160  200735 cri.go:89] found id: "a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec"
	I1008 23:00:27.395184  200735 cri.go:89] found id: "d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9"
	I1008 23:00:27.395221  200735 cri.go:89] found id: ""
	I1008 23:00:27.395308  200735 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 23:00:27.426213  200735 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:00:27Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:00:27.426366  200735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:00:27.451284  200735 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:00:27.451347  200735 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:00:27.451438  200735 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:00:27.470047  200735 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:00:27.470958  200735 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-779490" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:27.471537  200735 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-779490" cluster setting kubeconfig missing "default-k8s-diff-port-779490" context setting]
	I1008 23:00:27.472341  200735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.474373  200735 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:00:27.502661  200735 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 23:00:27.502691  200735 kubeadm.go:601] duration metric: took 51.324103ms to restartPrimaryControlPlane
	I1008 23:00:27.502701  200735 kubeadm.go:402] duration metric: took 166.440913ms to StartCluster
	I1008 23:00:27.502716  200735 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.502780  200735 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:27.504255  200735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.504498  200735 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:00:27.504946  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:27.504993  200735 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:00:27.505173  200735 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.505205  200735 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.505273  200735 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:00:27.505309  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.505228  200735 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.505496  200735 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.505504  200735 addons.go:247] addon dashboard should already be in state true
	I1008 23:00:27.505523  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.506138  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.505236  200735 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.506586  200735 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-779490"
	I1008 23:00:27.506810  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.507164  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.508033  200735 out.go:179] * Verifying Kubernetes components...
	I1008 23:00:27.511128  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:27.571481  200735 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.571510  200735 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:00:27.571533  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.571937  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.577698  200735 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:00:27.577791  200735 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:00:27.580753  200735 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:00:25.875806  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:25.933368  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:00:25.933388  200074 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:00:25.967177  200074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:25.989730  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:25.995808  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:00:25.995886  200074 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:00:26.064075  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:00:26.064158  200074 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:00:26.159420  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:00:26.159495  200074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:00:26.259916  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:00:26.260013  200074 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:00:26.366694  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:00:26.366756  200074 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:00:26.415309  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:00:26.415386  200074 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:00:26.450896  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:00:26.450973  200074 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:00:26.486667  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:26.486690  200074 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:00:26.525078  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:27.580864  200735 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:27.580880  200735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:00:27.580952  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.583763  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:00:27.583795  200735 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:00:27.583868  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.614715  200735 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:27.614741  200735 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:00:27.614805  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.638478  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.657760  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.663405  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.965178  200735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:28.011190  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:28.042994  200735 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 23:00:28.104531  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:00:28.104603  200735 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:00:28.169664  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:00:28.169736  200735 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:00:28.180277  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:28.323258  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:00:28.323335  200735 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:00:28.459418  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:00:28.459558  200735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:00:28.517653  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:00:28.517677  200735 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:00:28.543581  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:00:28.543607  200735 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:00:28.568175  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:00:28.568200  200735 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:00:28.591552  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:00:28.591579  200735 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:00:28.624882  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:28.624907  200735 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:00:28.682187  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
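Installing the dashboard addon is a two-phase pattern visible above: every manifest is first scp'd into /etc/kubernetes/addons, then a single kubectl apply is issued with one -f flag per file under KUBECONFIG=/var/lib/minikube/kubeconfig. A minimal sketch of assembling that invocation (binary and file paths copied from this run; error handling trimmed):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyManifests mirrors the single kubectl invocation in the log: each
    // addon manifest becomes its own -f argument so the whole set is applied
    // in one call against the node-local kubeconfig.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-dp.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    		// ...plus the clusterrole, rolebinding, sa, secret and configmap files above
    	}
    	err := applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
    		"/var/lib/minikube/kubeconfig", manifests)
    	fmt.Println("apply finished, err =", err)
    }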
	I1008 23:00:36.554642  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.678801563s)
	I1008 23:00:36.554692  200074 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.587441579s)
	I1008 23:00:36.554723  200074 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825429" to be "Ready" ...
	I1008 23:00:36.555033  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.565226311s)
	I1008 23:00:36.555298  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.030193657s)
	I1008 23:00:36.558520  200074 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-825429 addons enable metrics-server
	
	I1008 23:00:36.588258  200074 node_ready.go:49] node "embed-certs-825429" is "Ready"
	I1008 23:00:36.588291  200074 node_ready.go:38] duration metric: took 33.550217ms for node "embed-certs-825429" to be "Ready" ...
	I1008 23:00:36.588304  200074 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:00:36.588362  200074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:00:36.604701  200074 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1008 23:00:35.825467  200735 node_ready.go:49] node "default-k8s-diff-port-779490" is "Ready"
	I1008 23:00:35.825499  200735 node_ready.go:38] duration metric: took 7.782419961s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 23:00:35.825513  200735 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:00:35.825575  200735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:00:38.105427  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.094147032s)
	I1008 23:00:38.105534  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.925184377s)
	I1008 23:00:38.105652  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.423327121s)
	I1008 23:00:38.105678  200735 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.280089174s)
	I1008 23:00:38.106178  200735 api_server.go:72] duration metric: took 10.601654805s to wait for apiserver process to appear ...
	I1008 23:00:38.106187  200735 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:00:38.106203  200735 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 23:00:38.109033  200735 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-779490 addons enable metrics-server
	
	I1008 23:00:38.130970  200735 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 23:00:38.131050  200735 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 23:00:38.161807  200735 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1008 23:00:36.607526  200074 addons.go:514] duration metric: took 11.225039641s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1008 23:00:36.616796  200074 api_server.go:72] duration metric: took 11.234244971s to wait for apiserver process to appear ...
	I1008 23:00:36.616820  200074 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:00:36.616839  200074 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1008 23:00:36.626167  200074 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1008 23:00:36.627242  200074 api_server.go:141] control plane version: v1.34.1
	I1008 23:00:36.627269  200074 api_server.go:131] duration metric: took 10.441367ms to wait for apiserver health ...
	I1008 23:00:36.627278  200074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:00:36.631675  200074 system_pods.go:59] 8 kube-system pods found
	I1008 23:00:36.631714  200074 system_pods.go:61] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:36.631722  200074 system_pods.go:61] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 23:00:36.631729  200074 system_pods.go:61] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 23:00:36.631735  200074 system_pods.go:61] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:36.631742  200074 system_pods.go:61] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:36.631750  200074 system_pods.go:61] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 23:00:36.631757  200074 system_pods.go:61] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:36.631768  200074 system_pods.go:61] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 23:00:36.631774  200074 system_pods.go:74] duration metric: took 4.489884ms to wait for pod list to return data ...
	I1008 23:00:36.631788  200074 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:00:36.634659  200074 default_sa.go:45] found service account: "default"
	I1008 23:00:36.634682  200074 default_sa.go:55] duration metric: took 2.887786ms for default service account to be created ...
	I1008 23:00:36.634693  200074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 23:00:36.638046  200074 system_pods.go:86] 8 kube-system pods found
	I1008 23:00:36.638083  200074 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:36.638092  200074 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 23:00:36.638097  200074 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 23:00:36.638104  200074 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:36.638116  200074 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:36.638121  200074 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 23:00:36.638127  200074 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:36.638134  200074 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 23:00:36.638141  200074 system_pods.go:126] duration metric: took 3.443001ms to wait for k8s-apps to be running ...
	I1008 23:00:36.638155  200074 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 23:00:36.638211  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:00:36.653778  200074 system_svc.go:56] duration metric: took 15.614806ms WaitForService to wait for kubelet
	I1008 23:00:36.653803  200074 kubeadm.go:586] duration metric: took 11.271256497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:36.653821  200074 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:00:36.657347  200074 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:00:36.657379  200074 node_conditions.go:123] node cpu capacity is 2
	I1008 23:00:36.657391  200074 node_conditions.go:105] duration metric: took 3.563849ms to run NodePressure ...
	I1008 23:00:36.657403  200074 start.go:241] waiting for startup goroutines ...
	I1008 23:00:36.657411  200074 start.go:246] waiting for cluster config update ...
	I1008 23:00:36.657423  200074 start.go:255] writing updated cluster config ...
	I1008 23:00:36.657783  200074 ssh_runner.go:195] Run: rm -f paused
	I1008 23:00:36.670223  200074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:00:36.682756  200074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 23:00:38.706369  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	I1008 23:00:38.164701  200735 addons.go:514] duration metric: took 10.659691491s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1008 23:00:38.607275  200735 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 23:00:38.622438  200735 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1008 23:00:38.624605  200735 api_server.go:141] control plane version: v1.34.1
	I1008 23:00:38.624637  200735 api_server.go:131] duration metric: took 518.442986ms to wait for apiserver health ...
	I1008 23:00:38.624648  200735 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:00:38.630538  200735 system_pods.go:59] 8 kube-system pods found
	I1008 23:00:38.630582  200735 system_pods.go:61] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:38.630619  200735 system_pods.go:61] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 23:00:38.630633  200735 system_pods.go:61] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 23:00:38.630641  200735 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:38.630649  200735 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:38.630659  200735 system_pods.go:61] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 23:00:38.630668  200735 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:38.630688  200735 system_pods.go:61] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 23:00:38.630701  200735 system_pods.go:74] duration metric: took 6.047091ms to wait for pod list to return data ...
	I1008 23:00:38.630708  200735 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:00:38.636880  200735 default_sa.go:45] found service account: "default"
	I1008 23:00:38.636933  200735 default_sa.go:55] duration metric: took 6.183914ms for default service account to be created ...
	I1008 23:00:38.636950  200735 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 23:00:38.641529  200735 system_pods.go:86] 8 kube-system pods found
	I1008 23:00:38.641561  200735 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:38.641570  200735 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 23:00:38.641575  200735 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 23:00:38.641672  200735 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:38.641691  200735 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:38.641703  200735 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 23:00:38.641710  200735 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:38.641719  200735 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 23:00:38.641727  200735 system_pods.go:126] duration metric: took 4.769699ms to wait for k8s-apps to be running ...
	I1008 23:00:38.641752  200735 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 23:00:38.641843  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:00:38.657309  200735 system_svc.go:56] duration metric: took 15.563712ms WaitForService to wait for kubelet
	I1008 23:00:38.657341  200735 kubeadm.go:586] duration metric: took 11.152818203s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:38.657392  200735 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:00:38.660817  200735 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:00:38.660857  200735 node_conditions.go:123] node cpu capacity is 2
	I1008 23:00:38.660900  200735 node_conditions.go:105] duration metric: took 3.495048ms to run NodePressure ...
	I1008 23:00:38.660913  200735 start.go:241] waiting for startup goroutines ...
	I1008 23:00:38.660925  200735 start.go:246] waiting for cluster config update ...
	I1008 23:00:38.660937  200735 start.go:255] writing updated cluster config ...
	I1008 23:00:38.661285  200735 ssh_runner.go:195] Run: rm -f paused
	I1008 23:00:38.665450  200735 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:00:38.681495  200735 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 23:00:40.702946  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:41.192108  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:43.194681  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:45.689665  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:43.188107  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:45.195152  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:47.694917  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:50.202214  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:47.201882  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:49.202683  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:51.246618  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:52.690218  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:55.188303  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:53.690293  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:56.191657  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:57.694147  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:00.215108  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:58.688765  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:00.690867  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:02.690268  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:05.191132  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:03.190806  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:05.687338  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:07.691307  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	I1008 23:01:09.189198  200735 pod_ready.go:94] pod "coredns-66bc5c9577-9xx2v" is "Ready"
	I1008 23:01:09.189221  200735 pod_ready.go:86] duration metric: took 30.507687365s for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.193878  200735 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.198549  200735 pod_ready.go:94] pod "etcd-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.198580  200735 pod_ready.go:86] duration metric: took 4.672663ms for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.202726  200735 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.216341  200735 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.216428  200735 pod_ready.go:86] duration metric: took 13.672156ms for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.221298  200735 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.385313  200735 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.385345  200735 pod_ready.go:86] duration metric: took 164.020409ms for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.585312  200735 pod_ready.go:83] waiting for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.986012  200735 pod_ready.go:94] pod "kube-proxy-jrvxc" is "Ready"
	I1008 23:01:09.986041  200735 pod_ready.go:86] duration metric: took 400.698358ms for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.190147  200735 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.587493  200735 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:10.587525  200735 pod_ready.go:86] duration metric: took 397.349388ms for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.587538  200735 pod_ready.go:40] duration metric: took 31.922052481s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:01:10.662421  200735 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:01:10.665744  200735 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-779490" cluster and "default" namespace by default
	W1008 23:01:07.689010  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:09.689062  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:11.693197  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	I1008 23:01:12.189762  200074 pod_ready.go:94] pod "coredns-66bc5c9577-s7kcb" is "Ready"
	I1008 23:01:12.189792  200074 pod_ready.go:86] duration metric: took 35.506963864s for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.192723  200074 pod_ready.go:83] waiting for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.197407  200074 pod_ready.go:94] pod "etcd-embed-certs-825429" is "Ready"
	I1008 23:01:12.197430  200074 pod_ready.go:86] duration metric: took 4.678735ms for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.200027  200074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.204611  200074 pod_ready.go:94] pod "kube-apiserver-embed-certs-825429" is "Ready"
	I1008 23:01:12.204642  200074 pod_ready.go:86] duration metric: took 4.593655ms for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.206885  200074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.387130  200074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-825429" is "Ready"
	I1008 23:01:12.387178  200074 pod_ready.go:86] duration metric: took 180.247707ms for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.587705  200074 pod_ready.go:83] waiting for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.987048  200074 pod_ready.go:94] pod "kube-proxy-86wtc" is "Ready"
	I1008 23:01:12.987076  200074 pod_ready.go:86] duration metric: took 399.301634ms for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.187216  200074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.587259  200074 pod_ready.go:94] pod "kube-scheduler-embed-certs-825429" is "Ready"
	I1008 23:01:13.587290  200074 pod_ready.go:86] duration metric: took 400.047489ms for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.587304  200074 pod_ready.go:40] duration metric: took 36.916992323s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:01:13.655798  200074 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:01:13.659151  200074 out.go:179] * Done! kubectl is now configured to use "embed-certs-825429" cluster and "default" namespace by default
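The start logs above show the two waits minikube performs for each profile: polling the apiserver /healthz endpoint until the 500 responses (here failing only on poststarthook/rbac/bootstrap-roles) turn into a 200 "ok", and then waiting for the kube-system control-plane pods to report Ready. Below is a minimal sketch of the first wait, assuming a plain HTTP poll; it is not minikube's own implementation. The URL is taken from the default-k8s-diff-port-779490 log above, while the timeout, poll interval, and InsecureSkipVerify (for the cluster's self-signed certificate) are illustrative assumptions.

// healthzwait.go: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: local cluster with a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			// A 500 listing "[-]poststarthook/... failed" means the apiserver
			// is still finishing its bootstrap hooks; keep polling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}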
	
	
	==> CRI-O <==
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.733112288Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6731a31d-66c5-40bc-a51b-07aea9973a4d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.734409118Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4890a075-bb2e-4f7c-a508-a1a983c7abe7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.734655964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.743728015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.744071405Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/15330773ab5d098777ca9a161b7337acce8302e3dc668fc1eba96cdb3e15d2e3/merged/etc/passwd: no such file or directory"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.744175119Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/15330773ab5d098777ca9a161b7337acce8302e3dc668fc1eba96cdb3e15d2e3/merged/etc/group: no such file or directory"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.744516432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.761137207Z" level=info msg="Created container 12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee: kube-system/storage-provisioner/storage-provisioner" id=4890a075-bb2e-4f7c-a508-a1a983c7abe7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.762300842Z" level=info msg="Starting container: 12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee" id=4edbbe42-8a43-4ba3-92b9-3a56b9ee35b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.763859263Z" level=info msg="Started container" PID=1653 containerID=12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee description=kube-system/storage-provisioner/storage-provisioner id=4edbbe42-8a43-4ba3-92b9-3a56b9ee35b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f237529722e28e84d3fcd2fe897a1a246519233cbecd9c8e6c1e0b704ed6a207
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.048923037Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.057504021Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.057748636Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.057792837Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.061708789Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.061745162Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.061768867Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.065388339Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.065426501Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.065452356Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.069018895Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.069058814Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.069085103Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.072945292Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.072983733Z" level=info msg="Updated default CNI network name to kindnet"
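The CNI monitoring events above (CREATE of 10-kindnet.conflist.temp, two WRITEs, then a RENAME into 10-kindnet.conflist) record the usual write-temp-then-rename pattern for replacing a config file atomically, so a watcher such as CRI-O's CNI monitor never observes a half-written file. A minimal sketch of that pattern follows; it is an assumption about the technique, not kindnet's actual code, and the directory, file name, and payload are placeholders taken from the log.

// cniwrite.go: atomically install a CNI config by writing a temp file and renaming it.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func writeCNIConfigAtomically(dir, name string, data []byte) error {
	tmp := filepath.Join(dir, name+".temp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	// Rename is atomic on the same filesystem, so readers see either the old
	// config or the complete new one, never a partial write.
	return os.Rename(tmp, filepath.Join(dir, name))
}

func main() {
	conf := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[{"type":"ptp"}]}`)
	// Path taken from the log above; writing there requires root.
	if err := writeCNIConfigAtomically("/etc/cni/net.d", "10-kindnet.conflist", conf); err != nil {
		log.Fatal(err)
	}
}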
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	12860fa60b2b6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   f237529722e28       storage-provisioner                          kube-system
	e36f057891620       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   ed850752d4fc0       dashboard-metrics-scraper-6ffb444bf9-vlzgh   kubernetes-dashboard
	a0ca50beda48e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   d1f9c825ea729       kubernetes-dashboard-855c9754f9-449f2        kubernetes-dashboard
	1713b13b43200       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   dd73a51124473       busybox                                      default
	1f4b81ea4020a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   2ad91090b1a14       kindnet-kjmsw                                kube-system
	c0fdc682f025c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   f237529722e28       storage-provisioner                          kube-system
	2459fd3ba9053       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   f14c118ee24f8       coredns-66bc5c9577-s7kcb                     kube-system
	af62cf6b338b2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   82a1216eebda5       kube-proxy-86wtc                             kube-system
	55041cc30a387       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   78d9dcc061217       kube-controller-manager-embed-certs-825429   kube-system
	22eefec3ff76d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   601b04229c52a       etcd-embed-certs-825429                      kube-system
	2b4397a485127       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   249e597d2e4ef       kube-scheduler-embed-certs-825429            kube-system
	a4d4c06603233       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   20909c08a76c8       kube-apiserver-embed-certs-825429            kube-system
	
	
	==> coredns [2459fd3ba9053672ada5673a83f9c59ab57ebd0c4944a857bd3a952bcd5f7d2f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58948 - 44797 "HINFO IN 8650173283547477972.6731509134613730560. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029570413s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
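The entries above show coredns's kubernetes plugin timing out against the in-cluster apiserver service address (10.96.0.1:443) right after the restart and recovering afterwards, which lines up with the kindnet log further down, where the network-policy caches only finish syncing at 23:01:06. A quick way to tell a dead network path from a transient one is a plain TCP probe run from a pod on the node; the sketch below is illustrative only, with the address taken from the log and an arbitrary 3-second timeout.

// svcprobe.go: TCP reachability check for the kubernetes Service ClusterIP.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		// This is the failure mode behind the "i/o timeout" lines above.
		fmt.Println("service IP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("service IP reachable; earlier timeouts were transient")
}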
	
	
	==> describe nodes <==
	Name:               embed-certs-825429
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-825429
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=embed-certs-825429
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_58_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:58:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-825429
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 23:01:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 23:01:03 +0000   Wed, 08 Oct 2025 22:58:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 23:01:03 +0000   Wed, 08 Oct 2025 22:58:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 23:01:03 +0000   Wed, 08 Oct 2025 22:58:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 23:01:03 +0000   Wed, 08 Oct 2025 22:59:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-825429
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 b956126660f04039803926356103464c
	  System UUID:                9bcebe6b-6a1d-4fec-b0e0-57daefae99b1
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-s7kcb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 etcd-embed-certs-825429                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m33s
	  kube-system                 kindnet-kjmsw                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-embed-certs-825429             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-embed-certs-825429    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-86wtc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-embed-certs-825429             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vlzgh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-449f2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Warning  CgroupV1                 2m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m42s (x8 over 2m43s)  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m42s (x8 over 2m43s)  kubelet          Node embed-certs-825429 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m42s (x8 over 2m43s)  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m31s                  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s                  kubelet          Node embed-certs-825429 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s                  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m28s                  node-controller  Node embed-certs-825429 event: Registered Node embed-certs-825429 in Controller
	  Normal   NodeReady                104s                   kubelet          Node embed-certs-825429 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node embed-certs-825429 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node embed-certs-825429 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node embed-certs-825429 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node embed-certs-825429 event: Registered Node embed-certs-825429 in Controller
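For reference, the percentages in the Allocated resources block above are computed against the node's Allocatable figures (2 CPUs, i.e. 2000m, and 8022296Ki of memory): the CPU requests of the listed pods sum to 100m + 100m + 100m + 250m + 200m + 100m = 850m, and 850m / 2000m = 42.5%, shown as 42%; the 220Mi of memory requests (70Mi + 100Mi + 50Mi) against roughly 7834Mi allocatable comes to about 2.8%, shown as 2%, so the displayed values are truncated to whole percentages.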
	
	
	==> dmesg <==
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:58] overlayfs: idmapped layers are currently not supported
	[  +5.164783] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:00] overlayfs: idmapped layers are currently not supported
	[  +1.568442] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [22eefec3ff76db05811d4a86718d52b7b055ea7d7d671f8dbebc79eb5b28c061] <==
	{"level":"warn","ts":"2025-10-08T23:00:30.204657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.225862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.265940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.326322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.381761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.394317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.433840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.469836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.505961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.546202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.592434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.617782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.651112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.687556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.718620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.744667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.781043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.815642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.881824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.916012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.980112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:31.033020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:31.066427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:31.088758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:31.162880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39026","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:01:29 up  1:43,  0 user,  load average: 3.46, 2.47, 2.01
	Linux embed-certs-825429 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1f4b81ea4020a6156308c39a4d711c3cae16849618c6cd4a1f14b6b14a1d2393] <==
	I1008 23:00:34.720046       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 23:00:34.720288       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1008 23:00:34.720440       1 main.go:148] setting mtu 1500 for CNI 
	I1008 23:00:34.720453       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 23:00:34.720470       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T23:00:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 23:00:35.048950       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 23:00:35.049052       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 23:00:35.049091       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 23:00:35.110303       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 23:01:05.047109       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1008 23:01:05.050660       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 23:01:05.050660       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 23:01:05.059206       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1008 23:01:06.649467       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 23:01:06.649503       1 metrics.go:72] Registering metrics
	I1008 23:01:06.649577       1 controller.go:711] "Syncing nftables rules"
	I1008 23:01:15.048498       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 23:01:15.048616       1 main.go:301] handling current node
	I1008 23:01:25.053698       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 23:01:25.053799       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a4d4c06603233f6d3f0466d405ac5015842b9b9a3ddd88eaeb71a429911303a0] <==
	I1008 23:00:33.204921       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1008 23:00:33.205045       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1008 23:00:33.205188       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 23:00:33.213933       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 23:00:33.249276       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 23:00:33.253347       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 23:00:33.253363       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 23:00:33.261174       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 23:00:33.261260       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 23:00:33.272672       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 23:00:33.272898       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 23:00:33.276115       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1008 23:00:33.280402       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 23:00:33.341065       1 cache.go:39] Caches are synced for autoregister controller
	I1008 23:00:33.407446       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 23:00:33.575525       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 23:00:35.932416       1 controller.go:667] quota admission added evaluator for: namespaces
	I1008 23:00:36.070784       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 23:00:36.196754       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 23:00:36.225205       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 23:00:36.393242       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.229.96"}
	I1008 23:00:36.431072       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.111.251"}
	I1008 23:00:38.429912       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 23:00:38.677328       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 23:00:38.813533       1 controller.go:667] quota admission added evaluator for: endpoints
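The entries above show a ClusterIP allocator created for the Service CIDR 10.96.0.0/12 and the two dashboard Services being assigned ClusterIPs (10.97.229.96 and 10.108.111.251) from that range. The sketch below is only an illustrative check that an address falls inside that CIDR, not part of the test tooling; the third address comes from the node's PodCIDR (10.244.0.0/24, see the node description above) and is included as a counterexample.

// cidrcheck.go: verify which addresses fall inside the Service CIDR from the log.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	cidr := netip.MustParsePrefix("10.96.0.0/12") // from the apiserver log above
	for _, s := range []string{"10.97.229.96", "10.108.111.251", "10.244.0.1"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%-15s in %s: %v\n", s, cidr, cidr.Contains(ip))
	}
}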
	
	
	==> kube-controller-manager [55041cc30a387a17c3c9cf147c52e73bd7ccd0183b6e8e9db71a9640bc8f2175] <==
	I1008 23:00:38.275286       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1008 23:00:38.275318       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1008 23:00:38.275398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1008 23:00:38.275475       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 23:00:38.275534       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1008 23:00:38.275613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1008 23:00:38.275260       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 23:00:38.279481       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1008 23:00:38.279537       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1008 23:00:38.279568       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1008 23:00:38.279573       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1008 23:00:38.279579       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1008 23:00:38.291948       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 23:00:38.292236       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 23:00:38.309309       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 23:00:38.313287       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 23:00:38.320137       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 23:00:38.321323       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 23:00:38.321729       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 23:00:38.321772       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 23:00:38.330557       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 23:00:38.348198       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:00:38.368059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:00:38.368088       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 23:00:38.368097       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [af62cf6b338b21d6b9480139b1d489c4649cd4ade44f1ef4f7af892960632f3d] <==
	I1008 23:00:35.075435       1 server_linux.go:53] "Using iptables proxy"
	I1008 23:00:35.509745       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 23:00:35.689842       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 23:00:35.689875       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1008 23:00:35.689941       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 23:00:35.867302       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 23:00:35.867360       1 server_linux.go:132] "Using iptables Proxier"
	I1008 23:00:35.880487       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 23:00:35.880908       1 server.go:527] "Version info" version="v1.34.1"
	I1008 23:00:35.881145       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:00:35.887159       1 config.go:200] "Starting service config controller"
	I1008 23:00:35.887239       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 23:00:35.887716       1 config.go:106] "Starting endpoint slice config controller"
	I1008 23:00:35.900945       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 23:00:35.894959       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 23:00:35.901090       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 23:00:35.895701       1 config.go:309] "Starting node config controller"
	I1008 23:00:35.901101       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 23:00:35.901107       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 23:00:35.988757       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 23:00:36.002093       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 23:00:36.002194       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2b4397a485127543aacc4c006f8eda3f76ef0a1494d94a217bad28ca9644dec3] <==
	I1008 23:00:28.486660       1 serving.go:386] Generated self-signed cert in-memory
	W1008 23:00:33.140826       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 23:00:33.143980       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 23:00:33.144025       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 23:00:33.144037       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 23:00:33.265261       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 23:00:33.265293       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:00:33.291490       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 23:00:33.291582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:00:33.291599       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:00:33.291615       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 23:00:33.398024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 23:00:38 embed-certs-825429 kubelet[781]: I1008 23:00:38.952829     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c76l\" (UniqueName: \"kubernetes.io/projected/3f6ecdcd-1eed-428f-85ed-68596e1d32da-kube-api-access-8c76l\") pod \"dashboard-metrics-scraper-6ffb444bf9-vlzgh\" (UID: \"3f6ecdcd-1eed-428f-85ed-68596e1d32da\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh"
	Oct 08 23:00:39 embed-certs-825429 kubelet[781]: W1008 23:00:39.202311     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/crio-ed850752d4fc0b8e82597c844895b885487f9f0affb0d019bacd7e286d3f5192 WatchSource:0}: Error finding container ed850752d4fc0b8e82597c844895b885487f9f0affb0d019bacd7e286d3f5192: Status 404 returned error can't find the container with id ed850752d4fc0b8e82597c844895b885487f9f0affb0d019bacd7e286d3f5192
	Oct 08 23:00:39 embed-certs-825429 kubelet[781]: W1008 23:00:39.221133     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/crio-d1f9c825ea7290a2b6a521dd94f9aed48ea62f400ef0ecac48d44ad545463bf0 WatchSource:0}: Error finding container d1f9c825ea7290a2b6a521dd94f9aed48ea62f400ef0ecac48d44ad545463bf0: Status 404 returned error can't find the container with id d1f9c825ea7290a2b6a521dd94f9aed48ea62f400ef0ecac48d44ad545463bf0
	Oct 08 23:00:41 embed-certs-825429 kubelet[781]: I1008 23:00:41.666360     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 08 23:00:44 embed-certs-825429 kubelet[781]: I1008 23:00:44.639449     781 scope.go:117] "RemoveContainer" containerID="b8618432eeec0450cbffed5126ab8e7591b525ce49d1d4b1235f818ca747fff0"
	Oct 08 23:00:45 embed-certs-825429 kubelet[781]: I1008 23:00:45.645812     781 scope.go:117] "RemoveContainer" containerID="b8618432eeec0450cbffed5126ab8e7591b525ce49d1d4b1235f818ca747fff0"
	Oct 08 23:00:45 embed-certs-825429 kubelet[781]: I1008 23:00:45.646767     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:00:45 embed-certs-825429 kubelet[781]: E1008 23:00:45.646926     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:00:46 embed-certs-825429 kubelet[781]: I1008 23:00:46.650818     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:00:46 embed-certs-825429 kubelet[781]: E1008 23:00:46.654443     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:00:49 embed-certs-825429 kubelet[781]: I1008 23:00:49.139263     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:00:49 embed-certs-825429 kubelet[781]: E1008 23:00:49.139444     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: I1008 23:01:02.280122     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: I1008 23:01:02.720019     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: I1008 23:01:02.720431     781 scope.go:117] "RemoveContainer" containerID="e36f057891620b982eaccc9664bb49f05a3544bd09b31a8a03e27c78982d29d7"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: E1008 23:01:02.720616     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: I1008 23:01:02.743011     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-449f2" podStartSLOduration=12.099079102 podStartE2EDuration="24.742991143s" podCreationTimestamp="2025-10-08 23:00:38 +0000 UTC" firstStartedPulling="2025-10-08 23:00:39.227767525 +0000 UTC m=+15.263283873" lastFinishedPulling="2025-10-08 23:00:51.871679566 +0000 UTC m=+27.907195914" observedRunningTime="2025-10-08 23:00:52.715381506 +0000 UTC m=+28.750897879" watchObservedRunningTime="2025-10-08 23:01:02.742991143 +0000 UTC m=+38.778507499"
	Oct 08 23:01:05 embed-certs-825429 kubelet[781]: I1008 23:01:05.731268     781 scope.go:117] "RemoveContainer" containerID="c0fdc682f025c7d581ec1e76c0b8316090b7b1ba1c04a73b7d57e39600677e81"
	Oct 08 23:01:09 embed-certs-825429 kubelet[781]: I1008 23:01:09.140457     781 scope.go:117] "RemoveContainer" containerID="e36f057891620b982eaccc9664bb49f05a3544bd09b31a8a03e27c78982d29d7"
	Oct 08 23:01:09 embed-certs-825429 kubelet[781]: E1008 23:01:09.141075     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:01:21 embed-certs-825429 kubelet[781]: I1008 23:01:21.280064     781 scope.go:117] "RemoveContainer" containerID="e36f057891620b982eaccc9664bb49f05a3544bd09b31a8a03e27c78982d29d7"
	Oct 08 23:01:21 embed-certs-825429 kubelet[781]: E1008 23:01:21.280286     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:01:25 embed-certs-825429 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 23:01:26 embed-certs-825429 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 23:01:26 embed-certs-825429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a0ca50beda48eb593a29295444164c508e7747c30dcd8eacd75951f772dc6b39] <==
	2025/10/08 23:00:51 Using namespace: kubernetes-dashboard
	2025/10/08 23:00:51 Using in-cluster config to connect to apiserver
	2025/10/08 23:00:51 Using secret token for csrf signing
	2025/10/08 23:00:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/08 23:00:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/08 23:00:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/08 23:00:51 Generating JWE encryption key
	2025/10/08 23:00:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/08 23:00:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/08 23:00:52 Initializing JWE encryption key from synchronized object
	2025/10/08 23:00:52 Creating in-cluster Sidecar client
	2025/10/08 23:00:52 Serving insecurely on HTTP port: 9090
	2025/10/08 23:00:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 23:01:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 23:00:51 Starting overwatch
	
	
	==> storage-provisioner [12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee] <==
	I1008 23:01:05.776970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 23:01:05.791827       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 23:01:05.791887       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 23:01:05.794235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:09.250133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:13.511542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:17.111128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:20.165060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:23.186914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:23.192460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 23:01:23.192610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 23:01:23.192790       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-825429_aa9a6673-8932-43ab-8ada-b617def1371c!
	I1008 23:01:23.193878       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"deb8d6fa-4d23-4078-b8a3-474c7c204563", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-825429_aa9a6673-8932-43ab-8ada-b617def1371c became leader
	W1008 23:01:23.201930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:23.211355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 23:01:23.293811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-825429_aa9a6673-8932-43ab-8ada-b617def1371c!
	W1008 23:01:25.214574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:25.220322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:27.223836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:27.235178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:29.246214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:29.260947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c0fdc682f025c7d581ec1e76c0b8316090b7b1ba1c04a73b7d57e39600677e81] <==
	I1008 23:00:35.091281       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 23:01:05.093083       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
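The tail above ends with two recognizable failure signatures: dashboard-metrics-scraper-6ffb444bf9-vlzgh cycling through CrashLoopBackOff (back-off doubling from 10s to 20s) and the first storage-provisioner instance timing out against https://10.96.0.1:443 before its replacement acquires the lease. A minimal follow-up sketch, assuming the embed-certs-825429 context is still reachable and the pod has not been recreated, is to pull the crashed container's previous log and confirm the in-cluster kubernetes Service IP:

	kubectl --context embed-certs-825429 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-vlzgh --previous
	kubectl --context embed-certs-825429 get svc kubernetes -o wide

--previous returns the log of the last terminated instance, i.e. the container the back-off keeps restarting.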
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825429 -n embed-certs-825429
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825429 -n embed-certs-825429: exit status 2 (476.093997ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-825429 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
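The helper above asks for any pod whose phase is not Running, emitting only names via jsonpath; an empty result means every pod reported phase Running at that instant. The same filter with a readable table, assuming the embed-certs-825429 context:

	kubectl --context embed-certs-825429 get pods -A --field-selector=status.phase!=Running

Pods support the status.phase field selector, so this lists namespace, name and phase for anything Pending, Succeeded, Failed or Unknown at that moment.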
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-825429
helpers_test.go:243: (dbg) docker inspect embed-certs-825429:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687",
	        "Created": "2025-10-08T22:58:27.270368583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200204,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T23:00:16.089405901Z",
	            "FinishedAt": "2025-10-08T23:00:15.309856407Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/hostname",
	        "HostsPath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/hosts",
	        "LogPath": "/var/lib/docker/containers/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687-json.log",
	        "Name": "/embed-certs-825429",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-825429:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-825429",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687",
	                "LowerDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15d32fbfdaf0408547903211c726445950e1518e636878da63cc08f3965a235f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-825429",
	                "Source": "/var/lib/docker/volumes/embed-certs-825429/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-825429",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-825429",
	                "name.minikube.sigs.k8s.io": "embed-certs-825429",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f52ad1db0913ad47db87a08a00349d9a8f510bb792e345b7c5b906a924083f7",
	            "SandboxKey": "/var/run/docker/netns/2f52ad1db091",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-825429": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:22:ba:e2:61:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c72f626705cdbf95a7acf2a18c80971f9e1c7948333cf514c2faeca371944562",
	                    "EndpointID": "871592bb21b06c608cd7bf8bb7de5ad4a057521e4aea7ca06dd4ab31cdf4981c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-825429",
	                        "3489ded6521e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
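The inspect JSON above is captured whole, but the fields the surrounding tests actually consume (container state, host-mapped ports, node IP) can be pulled out individually with the same Go templates the driver uses later in these logs. A short sketch, assuming the embed-certs-825429 container still exists:

	docker inspect -f '{{.State.Status}} {{.State.Paused}}' embed-certs-825429
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-825429
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-825429

Against the JSON above these resolve to running false, 33084 and 192.168.76.2 respectively.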
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825429 -n embed-certs-825429
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825429 -n embed-certs-825429: exit status 2 (455.201887ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
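--format={{.Host}} prints a single field of the status struct, so the Running line above does not explain the exit code on its own. The same binary can emit every component at once, which is usually enough to see which component the pause left stopped; a minimal sketch, assuming the profile still exists:

	out/minikube-linux-arm64 status -p embed-certs-825429 --output json

The JSON form reports Host, Kubelet, APIServer and Kubeconfig together, so a stopped kubelet or paused apiserver shows up directly rather than as an opaque exit status 2.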
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-825429 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-825429 logs -n 25: (1.74301217s)
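logs -n 25 keeps only the last 25 lines of each component, which is why every section in the dump that follows is a short tail. The same command accepts a larger -n and (assuming the --file flag of minikube logs) a file destination when a fuller capture is needed offline; a minimal sketch, assuming the profile is still present:

	out/minikube-linux-arm64 -p embed-certs-825429 logs -n 200 --file=embed-certs-825429.log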
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-939665 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │                     │
	│ stop    │ -p no-preload-939665 --alsologtostderr -v=3                                                                                                                              │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ addons  │ enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:57 UTC │
	│ start   │ -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:57 UTC │ 08 Oct 25 22:58 UTC │
	│ image   │ no-preload-939665 image list --format=json                                                                                                                               │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ pause   │ -p no-preload-939665 --alsologtostderr -v=1                                                                                                                              │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │                     │
	│ ssh     │ force-systemd-flag-385382 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p force-systemd-flag-385382                                                                                                                                             │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                     │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                     │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-036919                                                                                                                                          │ disable-driver-mounts-036919 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:59 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │                     │
	│ stop    │ -p embed-certs-825429 --alsologtostderr -v=3                                                                                                                             │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ stop    │ -p default-k8s-diff-port-779490 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-825429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-779490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-779490 image list --format=json                                                                                                                    │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-779490 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ image   │ embed-certs-825429 image list --format=json                                                                                                                              │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p embed-certs-825429 --alsologtostderr -v=1                                                                                                                             │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 23:00:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 23:00:17.163938  200735 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:00:17.164058  200735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:00:17.164070  200735 out.go:374] Setting ErrFile to fd 2...
	I1008 23:00:17.164076  200735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:00:17.164320  200735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:00:17.164684  200735 out.go:368] Setting JSON to false
	I1008 23:00:17.165518  200735 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6168,"bootTime":1759958250,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 23:00:17.165584  200735 start.go:141] virtualization:  
	I1008 23:00:17.170349  200735 out.go:179] * [default-k8s-diff-port-779490] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 23:00:17.173550  200735 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 23:00:17.173606  200735 notify.go:220] Checking for updates...
	I1008 23:00:17.179549  200735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 23:00:17.182394  200735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:17.185318  200735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 23:00:17.188242  200735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 23:00:17.191227  200735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 23:00:17.194561  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:17.195186  200735 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 23:00:17.221784  200735 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 23:00:17.221965  200735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:00:17.290959  200735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-08 23:00:17.282099792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:00:17.291074  200735 docker.go:318] overlay module found
	I1008 23:00:17.294262  200735 out.go:179] * Using the docker driver based on existing profile
	I1008 23:00:17.297119  200735 start.go:305] selected driver: docker
	I1008 23:00:17.297140  200735 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:17.297251  200735 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 23:00:17.298023  200735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:00:17.356048  200735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-08 23:00:17.346390453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:00:17.356372  200735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:17.356415  200735 cni.go:84] Creating CNI manager for ""
	I1008 23:00:17.356471  200735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:17.356518  200735 start.go:349] cluster config:
	{Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:17.359826  200735 out.go:179] * Starting "default-k8s-diff-port-779490" primary control-plane node in "default-k8s-diff-port-779490" cluster
	I1008 23:00:17.362672  200735 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 23:00:17.365466  200735 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 23:00:17.368335  200735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:17.368364  200735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 23:00:17.368384  200735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 23:00:17.368391  200735 cache.go:58] Caching tarball of preloaded images
	I1008 23:00:17.368477  200735 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 23:00:17.368487  200735 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 23:00:17.368593  200735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 23:00:17.387741  200735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 23:00:17.387766  200735 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 23:00:17.387788  200735 cache.go:232] Successfully downloaded all kic artifacts
	I1008 23:00:17.387813  200735 start.go:360] acquireMachinesLock for default-k8s-diff-port-779490: {Name:mkf9138008d7ef2884518c448a03b33b088d9068 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 23:00:17.387870  200735 start.go:364] duration metric: took 34.314µs to acquireMachinesLock for "default-k8s-diff-port-779490"
	I1008 23:00:17.387894  200735 start.go:96] Skipping create...Using existing machine configuration
	I1008 23:00:17.387906  200735 fix.go:54] fixHost starting: 
	I1008 23:00:17.388165  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:17.405667  200735 fix.go:112] recreateIfNeeded on default-k8s-diff-port-779490: state=Stopped err=<nil>
	W1008 23:00:17.405698  200735 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 23:00:16.057868  200074 out.go:252] * Restarting existing docker container for "embed-certs-825429" ...
	I1008 23:00:16.057965  200074 cli_runner.go:164] Run: docker start embed-certs-825429
	I1008 23:00:16.315950  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:16.335815  200074 kic.go:430] container "embed-certs-825429" state is running.
	I1008 23:00:16.336208  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:16.356036  200074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/config.json ...
	I1008 23:00:16.356262  200074 machine.go:93] provisionDockerMachine start ...
	I1008 23:00:16.356315  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:16.378830  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:16.379148  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:16.379157  200074 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:00:16.380409  200074 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59024->127.0.0.1:33081: read: connection reset by peer
	I1008 23:00:19.529381  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 23:00:19.529407  200074 ubuntu.go:182] provisioning hostname "embed-certs-825429"
	I1008 23:00:19.529470  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:19.548688  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:19.549089  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:19.549126  200074 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-825429 && echo "embed-certs-825429" | sudo tee /etc/hostname
	I1008 23:00:19.704942  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-825429
	
	I1008 23:00:19.705029  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:19.723786  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:19.724093  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:19.724110  200074 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-825429' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-825429/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-825429' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:00:19.870310  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
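	The shell snippet above is an idempotent /etc/hosts edit: keep any existing mapping for the hostname, otherwise rewrite the 127.0.1.1 line or append one. A rough Go equivalent, assuming the same hostname from the log (illustrative only, not the code minikube runs):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the grep/sed logic: no-op if the name is already
	// mapped, otherwise replace an existing 127.0.1.1 line or append a new one.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts
		}
		line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if line127.MatchString(hosts) {
			return line127.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		if !strings.HasSuffix(hosts, "\n") {
			hosts += "\n"
		}
		return hosts + "127.0.1.1 " + name + "\n"
	}

	func main() {
		hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Print(ensureHostname(hosts, "embed-certs-825429"))
	}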
	I1008 23:00:19.870379  200074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:00:19.870406  200074 ubuntu.go:190] setting up certificates
	I1008 23:00:19.870417  200074 provision.go:84] configureAuth start
	I1008 23:00:19.870501  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:19.888221  200074 provision.go:143] copyHostCerts
	I1008 23:00:19.888292  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:00:19.888316  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:00:19.888394  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:00:19.888499  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:00:19.888508  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:00:19.888537  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:00:19.888603  200074 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:00:19.888615  200074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:00:19.888643  200074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:00:19.888697  200074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.embed-certs-825429 san=[127.0.0.1 192.168.76.2 embed-certs-825429 localhost minikube]
	I1008 23:00:17.408820  200735 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-779490" ...
	I1008 23:00:17.408898  200735 cli_runner.go:164] Run: docker start default-k8s-diff-port-779490
	I1008 23:00:17.666806  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:17.691387  200735 kic.go:430] container "default-k8s-diff-port-779490" state is running.
	I1008 23:00:17.691764  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:17.715368  200735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/config.json ...
	I1008 23:00:17.715595  200735 machine.go:93] provisionDockerMachine start ...
	I1008 23:00:17.715865  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:17.740298  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:17.740619  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:17.740636  200735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:00:17.741357  200735 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 23:00:20.909388  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 23:00:20.909415  200735 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-779490"
	I1008 23:00:20.909477  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:20.926770  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:20.927074  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:20.927096  200735 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-779490 && echo "default-k8s-diff-port-779490" | sudo tee /etc/hostname
	I1008 23:00:21.093286  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-779490
	
	I1008 23:00:21.093383  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:21.122816  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:21.123125  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:21.123144  200735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-779490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-779490/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-779490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:00:21.274338  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:00:21.274367  200735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:00:21.274399  200735 ubuntu.go:190] setting up certificates
	I1008 23:00:21.274412  200735 provision.go:84] configureAuth start
	I1008 23:00:21.274479  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:21.301901  200735 provision.go:143] copyHostCerts
	I1008 23:00:21.301972  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:00:21.301995  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:00:21.302061  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:00:21.302175  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:00:21.302187  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:00:21.302212  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:00:21.302280  200735 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:00:21.302297  200735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:00:21.302320  200735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:00:21.302377  200735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-779490 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-779490 localhost minikube]
	I1008 23:00:22.045829  200735 provision.go:177] copyRemoteCerts
	I1008 23:00:22.045958  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:00:22.046043  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.065464  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:20.814951  200074 provision.go:177] copyRemoteCerts
	I1008 23:00:20.815017  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:00:20.815059  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:20.834587  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:20.947002  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:00:20.966672  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 23:00:20.987841  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 23:00:21.017825  200074 provision.go:87] duration metric: took 1.147384041s to configureAuth
	I1008 23:00:21.017855  200074 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:00:21.018073  200074 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:21.018178  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.038971  200074 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:21.039282  200074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I1008 23:00:21.039304  200074 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:00:21.410917  200074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:00:21.410937  200074 machine.go:96] duration metric: took 5.054666132s to provisionDockerMachine
	I1008 23:00:21.410948  200074 start.go:293] postStartSetup for "embed-certs-825429" (driver="docker")
	I1008 23:00:21.410958  200074 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:00:21.411025  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:00:21.411063  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.439350  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.543094  200074 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:00:21.547406  200074 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:00:21.547435  200074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:00:21.547450  200074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:00:21.547507  200074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:00:21.547597  200074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:00:21.547700  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:00:21.556609  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:21.585243  200074 start.go:296] duration metric: took 174.278532ms for postStartSetup
	I1008 23:00:21.585334  200074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:00:21.585378  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.621333  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.735318  200074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:00:21.743106  200074 fix.go:56] duration metric: took 5.706738194s for fixHost
	I1008 23:00:21.743134  200074 start.go:83] releasing machines lock for "embed-certs-825429", held for 5.70679646s
	I1008 23:00:21.743208  200074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-825429
	I1008 23:00:21.767422  200074 ssh_runner.go:195] Run: cat /version.json
	I1008 23:00:21.767474  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.767704  200074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:00:21.767778  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:21.807518  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:21.808257  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:22.023792  200074 ssh_runner.go:195] Run: systemctl --version
	I1008 23:00:22.032065  200074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:00:22.086835  200074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:00:22.095791  200074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:00:22.095870  200074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:00:22.106263  200074 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 23:00:22.106289  200074 start.go:495] detecting cgroup driver to use...
	I1008 23:00:22.106323  200074 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:00:22.106377  200074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:00:22.126344  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:00:22.142497  200074 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:00:22.142563  200074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:00:22.158960  200074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:00:22.174798  200074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:00:22.323493  200074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:00:22.466670  200074 docker.go:234] disabling docker service ...
	I1008 23:00:22.466740  200074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:00:22.483900  200074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:00:22.498887  200074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:00:22.646149  200074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:00:22.804808  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:00:22.821564  200074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:00:22.839222  200074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:00:22.839285  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.851109  200074 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:00:22.851182  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.863916  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.878286  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.887691  200074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:00:22.897074  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.909548  200074 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.919602  200074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:22.930018  200074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:00:22.938657  200074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:00:22.946980  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:23.134756  200074 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:00:23.291036  200074 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:00:23.291115  200074 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:00:23.295899  200074 start.go:563] Will wait 60s for crictl version
	I1008 23:00:23.295972  200074 ssh_runner.go:195] Run: which crictl
	I1008 23:00:23.300513  200074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:00:23.339721  200074 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:00:23.339809  200074 ssh_runner.go:195] Run: crio --version
	I1008 23:00:23.382887  200074 ssh_runner.go:195] Run: crio --version
	I1008 23:00:23.427225  200074 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 23:00:22.179705  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:00:22.201073  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1008 23:00:22.231111  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 23:00:22.265814  200735 provision.go:87] duration metric: took 991.378792ms to configureAuth
	I1008 23:00:22.265882  200735 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:00:22.266132  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:22.266293  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.285804  200735 main.go:141] libmachine: Using SSH client type: native
	I1008 23:00:22.286122  200735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33086 <nil> <nil>}
	I1008 23:00:22.286137  200735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:00:22.656376  200735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:00:22.656462  200735 machine.go:96] duration metric: took 4.940857891s to provisionDockerMachine
	I1008 23:00:22.656490  200735 start.go:293] postStartSetup for "default-k8s-diff-port-779490" (driver="docker")
	I1008 23:00:22.656532  200735 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:00:22.656635  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:00:22.656703  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.681602  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:22.795033  200735 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:00:22.799606  200735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:00:22.799632  200735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:00:22.799644  200735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:00:22.799704  200735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:00:22.799788  200735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:00:22.799891  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:00:22.809604  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:22.832880  200735 start.go:296] duration metric: took 176.344915ms for postStartSetup
	I1008 23:00:22.833082  200735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:00:22.833170  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.857779  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:22.964061  200735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:00:22.969468  200735 fix.go:56] duration metric: took 5.581560799s for fixHost
	I1008 23:00:22.969491  200735 start.go:83] releasing machines lock for "default-k8s-diff-port-779490", held for 5.581607766s
	I1008 23:00:22.969557  200735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-779490
	I1008 23:00:22.988681  200735 ssh_runner.go:195] Run: cat /version.json
	I1008 23:00:22.988742  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:22.988958  200735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:00:22.989020  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:23.026248  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:23.043081  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:23.248291  200735 ssh_runner.go:195] Run: systemctl --version
	I1008 23:00:23.255759  200735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:00:23.326213  200735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:00:23.335019  200735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:00:23.335098  200735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:00:23.344495  200735 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 23:00:23.344539  200735 start.go:495] detecting cgroup driver to use...
	I1008 23:00:23.344575  200735 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:00:23.344639  200735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:00:23.367326  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:00:23.380944  200735 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:00:23.381008  200735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:00:23.398756  200735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:00:23.412634  200735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:00:23.559101  200735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:00:23.743425  200735 docker.go:234] disabling docker service ...
	I1008 23:00:23.743510  200735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:00:23.767092  200735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:00:23.784102  200735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:00:23.992289  200735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:00:24.197499  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:00:24.213564  200735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:00:24.241135  200735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:00:24.241200  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.259960  200735 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:00:24.260094  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.270690  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.284851  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.296200  200735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:00:24.304654  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.313931  200735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.322480  200735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:00:24.333103  200735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:00:24.342318  200735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:00:24.350381  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:24.494463  200735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:00:24.666167  200735 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:00:24.666337  200735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:00:24.670699  200735 start.go:563] Will wait 60s for crictl version
	I1008 23:00:24.670769  200735 ssh_runner.go:195] Run: which crictl
	I1008 23:00:24.674726  200735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:00:24.721851  200735 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:00:24.721939  200735 ssh_runner.go:195] Run: crio --version
	I1008 23:00:24.775722  200735 ssh_runner.go:195] Run: crio --version
	I1008 23:00:24.813408  200735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 23:00:23.430030  200074 cli_runner.go:164] Run: docker network inspect embed-certs-825429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:00:23.456528  200074 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 23:00:23.460989  200074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:23.482225  200074 kubeadm.go:883] updating cluster {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:00:23.482358  200074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:23.482421  200074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:23.531360  200074 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:23.531387  200074 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:00:23.531462  200074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:23.569867  200074 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:23.569936  200074 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:00:23.569960  200074 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1008 23:00:23.570103  200074 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-825429 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:00:23.570200  200074 ssh_runner.go:195] Run: crio config
	I1008 23:00:23.663769  200074 cni.go:84] Creating CNI manager for ""
	I1008 23:00:23.663807  200074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:23.663827  200074 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 23:00:23.663851  200074 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-825429 NodeName:embed-certs-825429 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:00:23.664032  200074 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-825429"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
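	The generated kubeadm.yaml above ends with a KubeletConfiguration whose two load-bearing fields for this CRI-O setup are cgroupDriver and containerRuntimeEndpoint. A small sketch, assuming gopkg.in/yaml.v3 is available, that parses just that fragment and reads those fields:

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
		FailSwapOn               bool   `yaml:"failSwapOn"`
	}

	const fragment = `
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failSwapOn: false
	`

	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("driver=%s endpoint=%s failSwapOn=%v\n",
			cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.FailSwapOn)
	}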
	I1008 23:00:23.664188  200074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:00:23.673332  200074 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:00:23.673424  200074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:00:23.682110  200074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1008 23:00:23.698014  200074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:00:23.714241  200074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1008 23:00:23.730391  200074 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:00:23.734792  200074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:23.747684  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:23.928606  200074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:23.946415  200074 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429 for IP: 192.168.76.2
	I1008 23:00:23.946441  200074 certs.go:195] generating shared ca certs ...
	I1008 23:00:23.946461  200074 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:23.946635  200074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:00:23.946693  200074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:00:23.946706  200074 certs.go:257] generating profile certs ...
	I1008 23:00:23.946793  200074 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/client.key
	I1008 23:00:23.946881  200074 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key.6dc562e3
	I1008 23:00:23.946947  200074 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key
	I1008 23:00:23.947094  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:00:23.947129  200074 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:00:23.947142  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:00:23.947170  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:00:23.947193  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:00:23.947224  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:00:23.947272  200074 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:23.947891  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:00:23.971323  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:00:23.996302  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:00:24.027533  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:00:24.067397  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1008 23:00:24.113587  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 23:00:24.171396  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:00:24.233317  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/embed-certs-825429/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:00:24.281842  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:00:24.312837  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:00:24.337367  200074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:00:24.364278  200074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:00:24.380163  200074 ssh_runner.go:195] Run: openssl version
	I1008 23:00:24.402171  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:00:24.411218  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.420653  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.420720  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:00:24.477008  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:00:24.486489  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:00:24.495742  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.500273  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.500338  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:00:24.545507  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:00:24.554243  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:00:24.568916  200074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.573351  200074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.573418  200074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:24.618186  200074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:00:24.629747  200074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:00:24.634953  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:00:24.681889  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:00:24.725355  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:00:24.834276  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:00:24.932960  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:00:25.074571  200074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
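	Each "openssl x509 -noout -checkend 86400" call above asks whether a certificate expires within the next 24 hours. The same check in Go, as a hedged sketch using only the standard library (the example path is one of the certs named in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path expires inside the window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}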
	I1008 23:00:25.193985  200074 kubeadm.go:400] StartCluster: {Name:embed-certs-825429 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-825429 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:25.194067  200074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:00:25.194141  200074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:00:25.269452  200074 cri.go:89] found id: "55041cc30a387a17c3c9cf147c52e73bd7ccd0183b6e8e9db71a9640bc8f2175"
	I1008 23:00:25.269472  200074 cri.go:89] found id: "22eefec3ff76db05811d4a86718d52b7b055ea7d7d671f8dbebc79eb5b28c061"
	I1008 23:00:25.269477  200074 cri.go:89] found id: "2b4397a485127543aacc4c006f8eda3f76ef0a1494d94a217bad28ca9644dec3"
	I1008 23:00:25.269481  200074 cri.go:89] found id: "a4d4c06603233f6d3f0466d405ac5015842b9b9a3ddd88eaeb71a429911303a0"
	I1008 23:00:25.269498  200074 cri.go:89] found id: ""
	I1008 23:00:25.269546  200074 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 23:00:25.281173  200074 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:00:25Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:00:25.281268  200074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:00:25.322177  200074 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:00:25.322195  200074 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:00:25.322243  200074 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:00:25.362965  200074 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:00:25.363367  200074 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-825429" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:25.363461  200074 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-825429" cluster setting kubeconfig missing "embed-certs-825429" context setting]
	I1008 23:00:25.363775  200074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.365003  200074 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:00:25.380609  200074 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1008 23:00:25.380686  200074 kubeadm.go:601] duration metric: took 58.482086ms to restartPrimaryControlPlane
	I1008 23:00:25.380710  200074 kubeadm.go:402] duration metric: took 186.742153ms to StartCluster
	I1008 23:00:25.380754  200074 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.380828  200074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:25.381889  200074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.382365  200074 config.go:182] Loaded profile config "embed-certs-825429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:25.382428  200074 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:00:25.382473  200074 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:00:25.382797  200074 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-825429"
	I1008 23:00:25.382821  200074 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-825429"
	W1008 23:00:25.382827  200074 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:00:25.382848  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.382884  200074 addons.go:69] Setting dashboard=true in profile "embed-certs-825429"
	I1008 23:00:25.382903  200074 addons.go:238] Setting addon dashboard=true in "embed-certs-825429"
	W1008 23:00:25.382909  200074 addons.go:247] addon dashboard should already be in state true
	I1008 23:00:25.382947  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.383306  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.383427  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.383753  200074 addons.go:69] Setting default-storageclass=true in profile "embed-certs-825429"
	I1008 23:00:25.383775  200074 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-825429"
	I1008 23:00:25.384049  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.389699  200074 out.go:179] * Verifying Kubernetes components...
	I1008 23:00:25.397744  200074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:25.427867  200074 addons.go:238] Setting addon default-storageclass=true in "embed-certs-825429"
	W1008 23:00:25.427894  200074 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:00:25.427918  200074 host.go:66] Checking if "embed-certs-825429" exists ...
	I1008 23:00:25.428350  200074 cli_runner.go:164] Run: docker container inspect embed-certs-825429 --format={{.State.Status}}
	I1008 23:00:25.462323  200074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:00:25.462386  200074 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:00:25.465277  200074 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:00:25.465378  200074 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:25.465394  200074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:00:25.465457  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.468927  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:00:25.468950  200074 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:00:25.469011  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.506947  200074 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:25.506970  200074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:00:25.507029  200074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-825429
	I1008 23:00:25.520333  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:25.546607  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:25.556438  200074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/embed-certs-825429/id_rsa Username:docker}
	I1008 23:00:24.816796  200735 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-779490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:00:24.843704  200735 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 23:00:24.847692  200735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:24.861363  200735 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:00:24.861469  200735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:00:24.861518  200735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:24.910267  200735 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:24.910349  200735 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:00:24.910448  200735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:00:24.962779  200735 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:00:24.962801  200735 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:00:24.962808  200735 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1008 23:00:24.962923  200735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-779490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:00:24.962999  200735 ssh_runner.go:195] Run: crio config
	I1008 23:00:25.062075  200735 cni.go:84] Creating CNI manager for ""
	I1008 23:00:25.062100  200735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:00:25.062118  200735 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 23:00:25.062149  200735 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-779490 NodeName:default-k8s-diff-port-779490 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:00:25.062285  200735 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-779490"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
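
The block above is the full kubeadm, kubelet and kube-proxy configuration that minikube writes out (to /var/tmp/minikube/kubeadm.yaml.new) before restarting the control plane, here for the default-k8s-diff-port-779490 profile on port 8444. As a rough illustration of how the per-profile values flow into that file, the following Go sketch renders just the InitConfiguration section from a template; the template text, type and field names are illustrative only, not minikube's actual implementation.

package main

import (
	"os"
	"text/template"
)

// A minimal sketch (not minikube's real template) of how per-node values such
// as the advertise address and API server port (8444 for this profile) could
// be rendered into the InitConfiguration section shown in the log above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

type params struct {
	NodeIP        string
	APIServerPort int
	NodeName      string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the default-k8s-diff-port-779490 profile in this log.
	_ = t.Execute(os.Stdout, params{
		NodeIP:        "192.168.85.2",
		APIServerPort: 8444,
		NodeName:      "default-k8s-diff-port-779490",
	})
}
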
	
	I1008 23:00:25.062361  200735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:00:25.074284  200735 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:00:25.074371  200735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:00:25.088117  200735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1008 23:00:25.106557  200735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:00:25.129827  200735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1008 23:00:25.149881  200735 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:00:25.154629  200735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:00:25.168582  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:25.460517  200735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:25.501961  200735 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490 for IP: 192.168.85.2
	I1008 23:00:25.501997  200735 certs.go:195] generating shared ca certs ...
	I1008 23:00:25.502015  200735 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:25.502157  200735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:00:25.502198  200735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:00:25.502204  200735 certs.go:257] generating profile certs ...
	I1008 23:00:25.502286  200735 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.key
	I1008 23:00:25.502350  200735 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key.e9b65765
	I1008 23:00:25.502386  200735 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key
	I1008 23:00:25.502503  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:00:25.502530  200735 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:00:25.502538  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:00:25.502563  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:00:25.502588  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:00:25.502609  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:00:25.502650  200735 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:00:25.503267  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:00:25.592800  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:00:25.646744  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:00:25.708575  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:00:25.781282  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1008 23:00:25.818906  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 23:00:25.877017  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:00:25.917052  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:00:25.947665  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:00:25.998644  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:00:26.025504  200735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:00:26.067106  200735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:00:26.088824  200735 ssh_runner.go:195] Run: openssl version
	I1008 23:00:26.100299  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:00:26.113073  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.120724  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.120843  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:00:26.190335  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:00:26.198935  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:00:26.210820  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.218162  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.218283  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:00:26.346366  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:00:26.373203  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:00:26.389547  200735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.402275  200735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.402419  200735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:00:26.505353  200735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:00:26.520251  200735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:00:26.536115  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:00:26.692708  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:00:26.825179  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:00:26.994307  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:00:27.130884  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:00:27.230322  200735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
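
The `openssl x509 -checkend 86400` calls above verify that each existing control-plane certificate remains valid for at least another 24 hours (86400 seconds) before the cluster restart proceeds. A hedged Go equivalent of that check, using the same path and window as the log (the helper name is ours, not minikube's), might look like:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, mirroring `openssl x509 -checkend 86400`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
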
	I1008 23:00:27.336269  200735 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-779490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-779490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:00:27.336415  200735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:00:27.336525  200735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:00:27.395074  200735 cri.go:89] found id: "0c79858102e85baa84c831afba4b7cc1c114f88a71fcf89c612559e0af787c7d"
	I1008 23:00:27.395140  200735 cri.go:89] found id: "b17976f27670a7423b42609ee4b2fa61871aed6dc1b36ac12ea09290dd17a12a"
	I1008 23:00:27.395160  200735 cri.go:89] found id: "a9d1c9861bc942173a82f22686131e4acf4d5525642733cf2918e0d8f84288ec"
	I1008 23:00:27.395184  200735 cri.go:89] found id: "d4862acbb325388728a58d351abb076457e0683b050f22eebca41887246090c9"
	I1008 23:00:27.395221  200735 cri.go:89] found id: ""
	I1008 23:00:27.395308  200735 ssh_runner.go:195] Run: sudo runc list -f json
	W1008 23:00:27.426213  200735 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:00:27Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:00:27.426366  200735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:00:27.451284  200735 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:00:27.451347  200735 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:00:27.451438  200735 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:00:27.470047  200735 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:00:27.470958  200735 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-779490" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:27.471537  200735 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-779490" cluster setting kubeconfig missing "default-k8s-diff-port-779490" context setting]
	I1008 23:00:27.472341  200735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.474373  200735 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:00:27.502661  200735 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 23:00:27.502691  200735 kubeadm.go:601] duration metric: took 51.324103ms to restartPrimaryControlPlane
	I1008 23:00:27.502701  200735 kubeadm.go:402] duration metric: took 166.440913ms to StartCluster
	I1008 23:00:27.502716  200735 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.502780  200735 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:00:27.504255  200735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:00:27.504498  200735 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:00:27.504946  200735 config.go:182] Loaded profile config "default-k8s-diff-port-779490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:00:27.504993  200735 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:00:27.505173  200735 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.505205  200735 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.505273  200735 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:00:27.505309  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.505228  200735 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.505496  200735 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.505504  200735 addons.go:247] addon dashboard should already be in state true
	I1008 23:00:27.505523  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.506138  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.505236  200735 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-779490"
	I1008 23:00:27.506586  200735 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-779490"
	I1008 23:00:27.506810  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.507164  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.508033  200735 out.go:179] * Verifying Kubernetes components...
	I1008 23:00:27.511128  200735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:00:27.571481  200735 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-779490"
	W1008 23:00:27.571510  200735 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:00:27.571533  200735 host.go:66] Checking if "default-k8s-diff-port-779490" exists ...
	I1008 23:00:27.571937  200735 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-779490 --format={{.State.Status}}
	I1008 23:00:27.577698  200735 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:00:27.577791  200735 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:00:27.580753  200735 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:00:25.875806  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:25.933368  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:00:25.933388  200074 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:00:25.967177  200074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:25.989730  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:25.995808  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:00:25.995886  200074 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:00:26.064075  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:00:26.064158  200074 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:00:26.159420  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:00:26.159495  200074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:00:26.259916  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:00:26.260013  200074 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:00:26.366694  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:00:26.366756  200074 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:00:26.415309  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:00:26.415386  200074 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:00:26.450896  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:00:26.450973  200074 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:00:26.486667  200074 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:26.486690  200074 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:00:26.525078  200074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:27.580864  200735 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:27.580880  200735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:00:27.580952  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.583763  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:00:27.583795  200735 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:00:27.583868  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.614715  200735 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:27.614741  200735 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:00:27.614805  200735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-779490
	I1008 23:00:27.638478  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.657760  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.663405  200735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33086 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/default-k8s-diff-port-779490/id_rsa Username:docker}
	I1008 23:00:27.965178  200735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:00:28.011190  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:00:28.042994  200735 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 23:00:28.104531  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:00:28.104603  200735 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:00:28.169664  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:00:28.169736  200735 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:00:28.180277  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:00:28.323258  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:00:28.323335  200735 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:00:28.459418  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:00:28.459558  200735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:00:28.517653  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:00:28.517677  200735 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:00:28.543581  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:00:28.543607  200735 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:00:28.568175  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:00:28.568200  200735 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:00:28.591552  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:00:28.591579  200735 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:00:28.624882  200735 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:28.624907  200735 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:00:28.682187  200735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:00:36.554642  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.678801563s)
	I1008 23:00:36.554692  200074 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.587441579s)
	I1008 23:00:36.554723  200074 node_ready.go:35] waiting up to 6m0s for node "embed-certs-825429" to be "Ready" ...
	I1008 23:00:36.555033  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.565226311s)
	I1008 23:00:36.555298  200074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.030193657s)
	I1008 23:00:36.558520  200074 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-825429 addons enable metrics-server
	
	I1008 23:00:36.588258  200074 node_ready.go:49] node "embed-certs-825429" is "Ready"
	I1008 23:00:36.588291  200074 node_ready.go:38] duration metric: took 33.550217ms for node "embed-certs-825429" to be "Ready" ...
	I1008 23:00:36.588304  200074 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:00:36.588362  200074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:00:36.604701  200074 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1008 23:00:35.825467  200735 node_ready.go:49] node "default-k8s-diff-port-779490" is "Ready"
	I1008 23:00:35.825499  200735 node_ready.go:38] duration metric: took 7.782419961s for node "default-k8s-diff-port-779490" to be "Ready" ...
	I1008 23:00:35.825513  200735 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:00:35.825575  200735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:00:38.105427  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.094147032s)
	I1008 23:00:38.105534  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.925184377s)
	I1008 23:00:38.105652  200735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.423327121s)
	I1008 23:00:38.105678  200735 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.280089174s)
	I1008 23:00:38.106178  200735 api_server.go:72] duration metric: took 10.601654805s to wait for apiserver process to appear ...
	I1008 23:00:38.106187  200735 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:00:38.106203  200735 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 23:00:38.109033  200735 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-779490 addons enable metrics-server
	
	I1008 23:00:38.130970  200735 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 23:00:38.131050  200735 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
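
The 500 response above is expected shortly after a control-plane restart: the rbac/bootstrap-roles post-start hook has not finished, so /healthz reports "healthz check failed" and minikube simply retries until it receives a 200 (which it does a few lines further down). A minimal sketch of such a retry loop in Go, assuming a self-signed apiserver certificate and using the URL from this log, could be:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline passes. A 500 body like the one in the log above is
// treated as "not ready yet, retry".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver here serves a self-signed certificate, so skip
		// verification for this illustrative check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
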
	I1008 23:00:38.161807  200735 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1008 23:00:36.607526  200074 addons.go:514] duration metric: took 11.225039641s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1008 23:00:36.616796  200074 api_server.go:72] duration metric: took 11.234244971s to wait for apiserver process to appear ...
	I1008 23:00:36.616820  200074 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:00:36.616839  200074 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1008 23:00:36.626167  200074 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1008 23:00:36.627242  200074 api_server.go:141] control plane version: v1.34.1
	I1008 23:00:36.627269  200074 api_server.go:131] duration metric: took 10.441367ms to wait for apiserver health ...
	I1008 23:00:36.627278  200074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:00:36.631675  200074 system_pods.go:59] 8 kube-system pods found
	I1008 23:00:36.631714  200074 system_pods.go:61] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:36.631722  200074 system_pods.go:61] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 23:00:36.631729  200074 system_pods.go:61] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 23:00:36.631735  200074 system_pods.go:61] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:36.631742  200074 system_pods.go:61] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:36.631750  200074 system_pods.go:61] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 23:00:36.631757  200074 system_pods.go:61] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:36.631768  200074 system_pods.go:61] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 23:00:36.631774  200074 system_pods.go:74] duration metric: took 4.489884ms to wait for pod list to return data ...
	I1008 23:00:36.631788  200074 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:00:36.634659  200074 default_sa.go:45] found service account: "default"
	I1008 23:00:36.634682  200074 default_sa.go:55] duration metric: took 2.887786ms for default service account to be created ...
	I1008 23:00:36.634693  200074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 23:00:36.638046  200074 system_pods.go:86] 8 kube-system pods found
	I1008 23:00:36.638083  200074 system_pods.go:89] "coredns-66bc5c9577-s7kcb" [5656ffce-aa1a-4e17-9d19-a3a2eeeba35f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:36.638092  200074 system_pods.go:89] "etcd-embed-certs-825429" [a320fa7e-9f2b-4b0f-9c1c-6665c6cac5ce] Running
	I1008 23:00:36.638097  200074 system_pods.go:89] "kindnet-kjmsw" [eb5b265b-7be1-4870-af88-23dfe38926c9] Running
	I1008 23:00:36.638104  200074 system_pods.go:89] "kube-apiserver-embed-certs-825429" [5a3c8f7b-671d-41e5-8068-7ddce042a943] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:36.638116  200074 system_pods.go:89] "kube-controller-manager-embed-certs-825429" [99c17d07-e1e1-427d-91a1-801f42784b89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:36.638121  200074 system_pods.go:89] "kube-proxy-86wtc" [3ccf3390-491f-4ac1-abd7-15bed7e0fdc3] Running
	I1008 23:00:36.638127  200074 system_pods.go:89] "kube-scheduler-embed-certs-825429" [a61cf77e-78cd-47bb-9619-42353f7e4afa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:36.638134  200074 system_pods.go:89] "storage-provisioner" [ccb25fa2-fa55-465c-9fcc-194f56db4ad4] Running
	I1008 23:00:36.638141  200074 system_pods.go:126] duration metric: took 3.443001ms to wait for k8s-apps to be running ...
	I1008 23:00:36.638155  200074 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 23:00:36.638211  200074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:00:36.653778  200074 system_svc.go:56] duration metric: took 15.614806ms WaitForService to wait for kubelet
	I1008 23:00:36.653803  200074 kubeadm.go:586] duration metric: took 11.271256497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:36.653821  200074 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:00:36.657347  200074 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:00:36.657379  200074 node_conditions.go:123] node cpu capacity is 2
	I1008 23:00:36.657391  200074 node_conditions.go:105] duration metric: took 3.563849ms to run NodePressure ...
	I1008 23:00:36.657403  200074 start.go:241] waiting for startup goroutines ...
	I1008 23:00:36.657411  200074 start.go:246] waiting for cluster config update ...
	I1008 23:00:36.657423  200074 start.go:255] writing updated cluster config ...
	I1008 23:00:36.657783  200074 ssh_runner.go:195] Run: rm -f paused
	I1008 23:00:36.670223  200074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:00:36.682756  200074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 23:00:38.706369  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	I1008 23:00:38.164701  200735 addons.go:514] duration metric: took 10.659691491s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1008 23:00:38.607275  200735 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1008 23:00:38.622438  200735 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1008 23:00:38.624605  200735 api_server.go:141] control plane version: v1.34.1
	I1008 23:00:38.624637  200735 api_server.go:131] duration metric: took 518.442986ms to wait for apiserver health ...
	I1008 23:00:38.624648  200735 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:00:38.630538  200735 system_pods.go:59] 8 kube-system pods found
	I1008 23:00:38.630582  200735 system_pods.go:61] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:38.630619  200735 system_pods.go:61] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 23:00:38.630633  200735 system_pods.go:61] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 23:00:38.630641  200735 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:38.630649  200735 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:38.630659  200735 system_pods.go:61] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 23:00:38.630668  200735 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:38.630688  200735 system_pods.go:61] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 23:00:38.630701  200735 system_pods.go:74] duration metric: took 6.047091ms to wait for pod list to return data ...
	I1008 23:00:38.630708  200735 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:00:38.636880  200735 default_sa.go:45] found service account: "default"
	I1008 23:00:38.636933  200735 default_sa.go:55] duration metric: took 6.183914ms for default service account to be created ...
	I1008 23:00:38.636950  200735 system_pods.go:116] waiting for k8s-apps to be running ...
	I1008 23:00:38.641529  200735 system_pods.go:86] 8 kube-system pods found
	I1008 23:00:38.641561  200735 system_pods.go:89] "coredns-66bc5c9577-9xx2v" [6311a0df-659e-42b5-a6ea-a6802aa5c5bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1008 23:00:38.641570  200735 system_pods.go:89] "etcd-default-k8s-diff-port-779490" [62e5779c-22cb-4cd3-adc0-beb892438c09] Running
	I1008 23:00:38.641575  200735 system_pods.go:89] "kindnet-9vmvl" [7fddc70f-a214-4909-ae97-566094420ce0] Running
	I1008 23:00:38.641672  200735 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-779490" [12aff927-400d-4715-a332-4d98c8d68745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:00:38.641691  200735 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-779490" [91db7f5f-fb48-4fe7-a10f-a3537bf731b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:00:38.641703  200735 system_pods.go:89] "kube-proxy-jrvxc" [cbffb55c-72e0-4086-b82a-f59db471adf4] Running
	I1008 23:00:38.641710  200735 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-779490" [b720244b-d1a3-4e3e-8eec-6e9f1df892de] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:00:38.641719  200735 system_pods.go:89] "storage-provisioner" [45961cee-2d6e-4219-bff8-34050548a8b0] Running
	I1008 23:00:38.641727  200735 system_pods.go:126] duration metric: took 4.769699ms to wait for k8s-apps to be running ...
	I1008 23:00:38.641752  200735 system_svc.go:44] waiting for kubelet service to be running ....
	I1008 23:00:38.641843  200735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:00:38.657309  200735 system_svc.go:56] duration metric: took 15.563712ms WaitForService to wait for kubelet
	I1008 23:00:38.657341  200735 kubeadm.go:586] duration metric: took 11.152818203s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:00:38.657392  200735 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:00:38.660817  200735 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:00:38.660857  200735 node_conditions.go:123] node cpu capacity is 2
	I1008 23:00:38.660900  200735 node_conditions.go:105] duration metric: took 3.495048ms to run NodePressure ...
	I1008 23:00:38.660913  200735 start.go:241] waiting for startup goroutines ...
	I1008 23:00:38.660925  200735 start.go:246] waiting for cluster config update ...
	I1008 23:00:38.660937  200735 start.go:255] writing updated cluster config ...
	I1008 23:00:38.661285  200735 ssh_runner.go:195] Run: rm -f paused
	I1008 23:00:38.665450  200735 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:00:38.681495  200735 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	W1008 23:00:40.702946  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:41.192108  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:43.194681  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:45.689665  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:43.188107  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:45.195152  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:47.694917  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:50.202214  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:47.201882  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:49.202683  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:51.246618  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:52.690218  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:55.188303  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:53.690293  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:56.191657  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:00:57.694147  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:00.215108  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:00:58.688765  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:00.690867  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:02.690268  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:05.191132  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:03.190806  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:05.687338  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	W1008 23:01:07.691307  200735 pod_ready.go:104] pod "coredns-66bc5c9577-9xx2v" is not "Ready", error: <nil>
	I1008 23:01:09.189198  200735 pod_ready.go:94] pod "coredns-66bc5c9577-9xx2v" is "Ready"
	I1008 23:01:09.189221  200735 pod_ready.go:86] duration metric: took 30.507687365s for pod "coredns-66bc5c9577-9xx2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.193878  200735 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.198549  200735 pod_ready.go:94] pod "etcd-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.198580  200735 pod_ready.go:86] duration metric: took 4.672663ms for pod "etcd-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.202726  200735 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.216341  200735 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.216428  200735 pod_ready.go:86] duration metric: took 13.672156ms for pod "kube-apiserver-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.221298  200735 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.385313  200735 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:09.385345  200735 pod_ready.go:86] duration metric: took 164.020409ms for pod "kube-controller-manager-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.585312  200735 pod_ready.go:83] waiting for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:09.986012  200735 pod_ready.go:94] pod "kube-proxy-jrvxc" is "Ready"
	I1008 23:01:09.986041  200735 pod_ready.go:86] duration metric: took 400.698358ms for pod "kube-proxy-jrvxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.190147  200735 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.587493  200735 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-779490" is "Ready"
	I1008 23:01:10.587525  200735 pod_ready.go:86] duration metric: took 397.349388ms for pod "kube-scheduler-default-k8s-diff-port-779490" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:10.587538  200735 pod_ready.go:40] duration metric: took 31.922052481s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:01:10.662421  200735 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:01:10.665744  200735 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-779490" cluster and "default" namespace by default
	W1008 23:01:07.689010  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:09.689062  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	W1008 23:01:11.693197  200074 pod_ready.go:104] pod "coredns-66bc5c9577-s7kcb" is not "Ready", error: <nil>
	I1008 23:01:12.189762  200074 pod_ready.go:94] pod "coredns-66bc5c9577-s7kcb" is "Ready"
	I1008 23:01:12.189792  200074 pod_ready.go:86] duration metric: took 35.506963864s for pod "coredns-66bc5c9577-s7kcb" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.192723  200074 pod_ready.go:83] waiting for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.197407  200074 pod_ready.go:94] pod "etcd-embed-certs-825429" is "Ready"
	I1008 23:01:12.197430  200074 pod_ready.go:86] duration metric: took 4.678735ms for pod "etcd-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.200027  200074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.204611  200074 pod_ready.go:94] pod "kube-apiserver-embed-certs-825429" is "Ready"
	I1008 23:01:12.204642  200074 pod_ready.go:86] duration metric: took 4.593655ms for pod "kube-apiserver-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.206885  200074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.387130  200074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-825429" is "Ready"
	I1008 23:01:12.387178  200074 pod_ready.go:86] duration metric: took 180.247707ms for pod "kube-controller-manager-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.587705  200074 pod_ready.go:83] waiting for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:12.987048  200074 pod_ready.go:94] pod "kube-proxy-86wtc" is "Ready"
	I1008 23:01:12.987076  200074 pod_ready.go:86] duration metric: took 399.301634ms for pod "kube-proxy-86wtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.187216  200074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.587259  200074 pod_ready.go:94] pod "kube-scheduler-embed-certs-825429" is "Ready"
	I1008 23:01:13.587290  200074 pod_ready.go:86] duration metric: took 400.047489ms for pod "kube-scheduler-embed-certs-825429" in "kube-system" namespace to be "Ready" or be gone ...
	I1008 23:01:13.587304  200074 pod_ready.go:40] duration metric: took 36.916992323s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1008 23:01:13.655798  200074 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:01:13.659151  200074 out.go:179] * Done! kubectl is now configured to use "embed-certs-825429" cluster and "default" namespace by default
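	
	Editor's note on the readiness wait recorded above: after startup, minikube performs an extra wait, polling each kube-system pod that carries one of the listed control-plane labels until its Ready condition is True or the 4m0s deadline passes (here the coredns pods took ~30-37s). The sketch below is only an illustration of that pattern with client-go; it is not minikube's pod_ready.go. The waitReady helper, the kubeconfig path, and the hard-coded selectors (copied from the labels in the log) are assumptions made for the example.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether a pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	// waitReady polls pods matching sel in kube-system until all are Ready or the deadline passes.
	// Hypothetical helper for illustration; not minikube code.
	func waitReady(cs *kubernetes.Clientset, sel string, deadline time.Time) error {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 {
				allReady := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						allReady = false
					}
				}
				if allReady {
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %q", sel)
			}
			time.Sleep(2 * time.Second)
		}
	}
	
	func main() {
		// Assumes the default kubeconfig written by "minikube start".
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"} {
			if err := waitReady(cs, sel, deadline); err != nil {
				fmt.Println(err)
			}
		}
	}
	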
	
	
	==> CRI-O <==
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.733112288Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6731a31d-66c5-40bc-a51b-07aea9973a4d name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.734409118Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=4890a075-bb2e-4f7c-a508-a1a983c7abe7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.734655964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.743728015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.744071405Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/15330773ab5d098777ca9a161b7337acce8302e3dc668fc1eba96cdb3e15d2e3/merged/etc/passwd: no such file or directory"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.744175119Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/15330773ab5d098777ca9a161b7337acce8302e3dc668fc1eba96cdb3e15d2e3/merged/etc/group: no such file or directory"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.744516432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.761137207Z" level=info msg="Created container 12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee: kube-system/storage-provisioner/storage-provisioner" id=4890a075-bb2e-4f7c-a508-a1a983c7abe7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.762300842Z" level=info msg="Starting container: 12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee" id=4edbbe42-8a43-4ba3-92b9-3a56b9ee35b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:01:05 embed-certs-825429 crio[654]: time="2025-10-08T23:01:05.763859263Z" level=info msg="Started container" PID=1653 containerID=12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee description=kube-system/storage-provisioner/storage-provisioner id=4edbbe42-8a43-4ba3-92b9-3a56b9ee35b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f237529722e28e84d3fcd2fe897a1a246519233cbecd9c8e6c1e0b704ed6a207
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.048923037Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.057504021Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.057748636Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.057792837Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.061708789Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.061745162Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.061768867Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.065388339Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.065426501Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.065452356Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.069018895Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.069058814Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.069085103Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.072945292Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 08 23:01:15 embed-certs-825429 crio[654]: time="2025-10-08T23:01:15.072983733Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	12860fa60b2b6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   f237529722e28       storage-provisioner                          kube-system
	e36f057891620       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago       Exited              dashboard-metrics-scraper   2                   ed850752d4fc0       dashboard-metrics-scraper-6ffb444bf9-vlzgh   kubernetes-dashboard
	a0ca50beda48e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   d1f9c825ea729       kubernetes-dashboard-855c9754f9-449f2        kubernetes-dashboard
	1713b13b43200       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   dd73a51124473       busybox                                      default
	1f4b81ea4020a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   2ad91090b1a14       kindnet-kjmsw                                kube-system
	c0fdc682f025c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   f237529722e28       storage-provisioner                          kube-system
	2459fd3ba9053       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   f14c118ee24f8       coredns-66bc5c9577-s7kcb                     kube-system
	af62cf6b338b2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   82a1216eebda5       kube-proxy-86wtc                             kube-system
	55041cc30a387       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   78d9dcc061217       kube-controller-manager-embed-certs-825429   kube-system
	22eefec3ff76d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   601b04229c52a       etcd-embed-certs-825429                      kube-system
	2b4397a485127       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   249e597d2e4ef       kube-scheduler-embed-certs-825429            kube-system
	a4d4c06603233       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   20909c08a76c8       kube-apiserver-embed-certs-825429            kube-system
	
	
	==> coredns [2459fd3ba9053672ada5673a83f9c59ab57ebd0c4944a857bd3a952bcd5f7d2f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58948 - 44797 "HINFO IN 8650173283547477972.6731509134613730560. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029570413s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-825429
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-825429
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=embed-certs-825429
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T22_58_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 22:58:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-825429
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 23:01:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 23:01:03 +0000   Wed, 08 Oct 2025 22:58:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 23:01:03 +0000   Wed, 08 Oct 2025 22:58:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 23:01:03 +0000   Wed, 08 Oct 2025 22:58:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Oct 2025 23:01:03 +0000   Wed, 08 Oct 2025 22:59:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-825429
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 b956126660f04039803926356103464c
	  System UUID:                9bcebe6b-6a1d-4fec-b0e0-57daefae99b1
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-s7kcb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 etcd-embed-certs-825429                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m36s
	  kube-system                 kindnet-kjmsw                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m30s
	  kube-system                 kube-apiserver-embed-certs-825429             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-embed-certs-825429    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-86wtc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-embed-certs-825429             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vlzgh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-449f2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m27s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Warning  CgroupV1                 2m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m45s (x8 over 2m46s)  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m45s (x8 over 2m46s)  kubelet          Node embed-certs-825429 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m45s (x8 over 2m46s)  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s                  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s                  kubelet          Node embed-certs-825429 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s                  kubelet          Node embed-certs-825429 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m31s                  node-controller  Node embed-certs-825429 event: Registered Node embed-certs-825429 in Controller
	  Normal   NodeReady                107s                   kubelet          Node embed-certs-825429 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node embed-certs-825429 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node embed-certs-825429 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node embed-certs-825429 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node embed-certs-825429 event: Registered Node embed-certs-825429 in Controller
	
	
	==> dmesg <==
	[Oct 8 22:33] overlayfs: idmapped layers are currently not supported
	[ +29.139481] overlayfs: idmapped layers are currently not supported
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:58] overlayfs: idmapped layers are currently not supported
	[  +5.164783] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:00] overlayfs: idmapped layers are currently not supported
	[  +1.568442] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [22eefec3ff76db05811d4a86718d52b7b055ea7d7d671f8dbebc79eb5b28c061] <==
	{"level":"warn","ts":"2025-10-08T23:00:30.204657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.225862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.265940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.326322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.381761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.394317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.433840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.469836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.505961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.546202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.592434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.617782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.651112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.687556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.718620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.744667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.781043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.815642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.881824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.916012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:30.980112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:31.033020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:31.066427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:31.088758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:00:31.162880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39026","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:01:32 up  1:44,  0 user,  load average: 3.82, 2.56, 2.04
	Linux embed-certs-825429 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1f4b81ea4020a6156308c39a4d711c3cae16849618c6cd4a1f14b6b14a1d2393] <==
	I1008 23:00:34.720046       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 23:00:34.720288       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1008 23:00:34.720440       1 main.go:148] setting mtu 1500 for CNI 
	I1008 23:00:34.720453       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 23:00:34.720470       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T23:00:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 23:00:35.048950       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 23:00:35.049052       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 23:00:35.049091       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 23:00:35.110303       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1008 23:01:05.047109       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1008 23:01:05.050660       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1008 23:01:05.050660       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1008 23:01:05.059206       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1008 23:01:06.649467       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1008 23:01:06.649503       1 metrics.go:72] Registering metrics
	I1008 23:01:06.649577       1 controller.go:711] "Syncing nftables rules"
	I1008 23:01:15.048498       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 23:01:15.048616       1 main.go:301] handling current node
	I1008 23:01:25.053698       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1008 23:01:25.053799       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a4d4c06603233f6d3f0466d405ac5015842b9b9a3ddd88eaeb71a429911303a0] <==
	I1008 23:00:33.204921       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1008 23:00:33.205045       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1008 23:00:33.205188       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 23:00:33.213933       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 23:00:33.249276       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 23:00:33.253347       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 23:00:33.253363       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 23:00:33.261174       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 23:00:33.261260       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 23:00:33.272672       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 23:00:33.272898       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1008 23:00:33.276115       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1008 23:00:33.280402       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 23:00:33.341065       1 cache.go:39] Caches are synced for autoregister controller
	I1008 23:00:33.407446       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 23:00:33.575525       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 23:00:35.932416       1 controller.go:667] quota admission added evaluator for: namespaces
	I1008 23:00:36.070784       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 23:00:36.196754       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 23:00:36.225205       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 23:00:36.393242       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.229.96"}
	I1008 23:00:36.431072       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.111.251"}
	I1008 23:00:38.429912       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 23:00:38.677328       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 23:00:38.813533       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [55041cc30a387a17c3c9cf147c52e73bd7ccd0183b6e8e9db71a9640bc8f2175] <==
	I1008 23:00:38.275286       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1008 23:00:38.275318       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1008 23:00:38.275398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1008 23:00:38.275475       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1008 23:00:38.275534       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1008 23:00:38.275613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1008 23:00:38.275260       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 23:00:38.279481       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1008 23:00:38.279537       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1008 23:00:38.279568       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1008 23:00:38.279573       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1008 23:00:38.279579       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1008 23:00:38.291948       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 23:00:38.292236       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 23:00:38.309309       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 23:00:38.313287       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 23:00:38.320137       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1008 23:00:38.321323       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 23:00:38.321729       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 23:00:38.321772       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 23:00:38.330557       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 23:00:38.348198       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:00:38.368059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:00:38.368088       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 23:00:38.368097       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [af62cf6b338b21d6b9480139b1d489c4649cd4ade44f1ef4f7af892960632f3d] <==
	I1008 23:00:35.075435       1 server_linux.go:53] "Using iptables proxy"
	I1008 23:00:35.509745       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 23:00:35.689842       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 23:00:35.689875       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1008 23:00:35.689941       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 23:00:35.867302       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 23:00:35.867360       1 server_linux.go:132] "Using iptables Proxier"
	I1008 23:00:35.880487       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 23:00:35.880908       1 server.go:527] "Version info" version="v1.34.1"
	I1008 23:00:35.881145       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:00:35.887159       1 config.go:200] "Starting service config controller"
	I1008 23:00:35.887239       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 23:00:35.887716       1 config.go:106] "Starting endpoint slice config controller"
	I1008 23:00:35.900945       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 23:00:35.894959       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 23:00:35.901090       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 23:00:35.895701       1 config.go:309] "Starting node config controller"
	I1008 23:00:35.901101       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 23:00:35.901107       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 23:00:35.988757       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 23:00:36.002093       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 23:00:36.002194       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2b4397a485127543aacc4c006f8eda3f76ef0a1494d94a217bad28ca9644dec3] <==
	I1008 23:00:28.486660       1 serving.go:386] Generated self-signed cert in-memory
	W1008 23:00:33.140826       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1008 23:00:33.143980       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1008 23:00:33.144025       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1008 23:00:33.144037       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1008 23:00:33.265261       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 23:00:33.265293       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:00:33.291490       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 23:00:33.291582       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:00:33.291599       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:00:33.291615       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 23:00:33.398024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 23:00:38 embed-certs-825429 kubelet[781]: I1008 23:00:38.952829     781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c76l\" (UniqueName: \"kubernetes.io/projected/3f6ecdcd-1eed-428f-85ed-68596e1d32da-kube-api-access-8c76l\") pod \"dashboard-metrics-scraper-6ffb444bf9-vlzgh\" (UID: \"3f6ecdcd-1eed-428f-85ed-68596e1d32da\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh"
	Oct 08 23:00:39 embed-certs-825429 kubelet[781]: W1008 23:00:39.202311     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/crio-ed850752d4fc0b8e82597c844895b885487f9f0affb0d019bacd7e286d3f5192 WatchSource:0}: Error finding container ed850752d4fc0b8e82597c844895b885487f9f0affb0d019bacd7e286d3f5192: Status 404 returned error can't find the container with id ed850752d4fc0b8e82597c844895b885487f9f0affb0d019bacd7e286d3f5192
	Oct 08 23:00:39 embed-certs-825429 kubelet[781]: W1008 23:00:39.221133     781 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3489ded6521eae1018fcd92f81b847fbcbbb2221d9db76cd8e7efb150c72a687/crio-d1f9c825ea7290a2b6a521dd94f9aed48ea62f400ef0ecac48d44ad545463bf0 WatchSource:0}: Error finding container d1f9c825ea7290a2b6a521dd94f9aed48ea62f400ef0ecac48d44ad545463bf0: Status 404 returned error can't find the container with id d1f9c825ea7290a2b6a521dd94f9aed48ea62f400ef0ecac48d44ad545463bf0
	Oct 08 23:00:41 embed-certs-825429 kubelet[781]: I1008 23:00:41.666360     781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 08 23:00:44 embed-certs-825429 kubelet[781]: I1008 23:00:44.639449     781 scope.go:117] "RemoveContainer" containerID="b8618432eeec0450cbffed5126ab8e7591b525ce49d1d4b1235f818ca747fff0"
	Oct 08 23:00:45 embed-certs-825429 kubelet[781]: I1008 23:00:45.645812     781 scope.go:117] "RemoveContainer" containerID="b8618432eeec0450cbffed5126ab8e7591b525ce49d1d4b1235f818ca747fff0"
	Oct 08 23:00:45 embed-certs-825429 kubelet[781]: I1008 23:00:45.646767     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:00:45 embed-certs-825429 kubelet[781]: E1008 23:00:45.646926     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:00:46 embed-certs-825429 kubelet[781]: I1008 23:00:46.650818     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:00:46 embed-certs-825429 kubelet[781]: E1008 23:00:46.654443     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:00:49 embed-certs-825429 kubelet[781]: I1008 23:00:49.139263     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:00:49 embed-certs-825429 kubelet[781]: E1008 23:00:49.139444     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: I1008 23:01:02.280122     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: I1008 23:01:02.720019     781 scope.go:117] "RemoveContainer" containerID="7594d28376a3bbc9b5d0ff9ab210e875a9fa3deba8e8ccf23792156df0b259b7"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: I1008 23:01:02.720431     781 scope.go:117] "RemoveContainer" containerID="e36f057891620b982eaccc9664bb49f05a3544bd09b31a8a03e27c78982d29d7"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: E1008 23:01:02.720616     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:01:02 embed-certs-825429 kubelet[781]: I1008 23:01:02.743011     781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-449f2" podStartSLOduration=12.099079102 podStartE2EDuration="24.742991143s" podCreationTimestamp="2025-10-08 23:00:38 +0000 UTC" firstStartedPulling="2025-10-08 23:00:39.227767525 +0000 UTC m=+15.263283873" lastFinishedPulling="2025-10-08 23:00:51.871679566 +0000 UTC m=+27.907195914" observedRunningTime="2025-10-08 23:00:52.715381506 +0000 UTC m=+28.750897879" watchObservedRunningTime="2025-10-08 23:01:02.742991143 +0000 UTC m=+38.778507499"
	Oct 08 23:01:05 embed-certs-825429 kubelet[781]: I1008 23:01:05.731268     781 scope.go:117] "RemoveContainer" containerID="c0fdc682f025c7d581ec1e76c0b8316090b7b1ba1c04a73b7d57e39600677e81"
	Oct 08 23:01:09 embed-certs-825429 kubelet[781]: I1008 23:01:09.140457     781 scope.go:117] "RemoveContainer" containerID="e36f057891620b982eaccc9664bb49f05a3544bd09b31a8a03e27c78982d29d7"
	Oct 08 23:01:09 embed-certs-825429 kubelet[781]: E1008 23:01:09.141075     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:01:21 embed-certs-825429 kubelet[781]: I1008 23:01:21.280064     781 scope.go:117] "RemoveContainer" containerID="e36f057891620b982eaccc9664bb49f05a3544bd09b31a8a03e27c78982d29d7"
	Oct 08 23:01:21 embed-certs-825429 kubelet[781]: E1008 23:01:21.280286     781 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vlzgh_kubernetes-dashboard(3f6ecdcd-1eed-428f-85ed-68596e1d32da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vlzgh" podUID="3f6ecdcd-1eed-428f-85ed-68596e1d32da"
	Oct 08 23:01:25 embed-certs-825429 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 23:01:26 embed-certs-825429 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 23:01:26 embed-certs-825429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a0ca50beda48eb593a29295444164c508e7747c30dcd8eacd75951f772dc6b39] <==
	2025/10/08 23:00:51 Using namespace: kubernetes-dashboard
	2025/10/08 23:00:51 Using in-cluster config to connect to apiserver
	2025/10/08 23:00:51 Using secret token for csrf signing
	2025/10/08 23:00:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/08 23:00:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/08 23:00:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/08 23:00:51 Generating JWE encryption key
	2025/10/08 23:00:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/08 23:00:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/08 23:00:52 Initializing JWE encryption key from synchronized object
	2025/10/08 23:00:52 Creating in-cluster Sidecar client
	2025/10/08 23:00:52 Serving insecurely on HTTP port: 9090
	2025/10/08 23:00:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 23:01:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/08 23:00:51 Starting overwatch
	
	
	==> storage-provisioner [12860fa60b2b652a6c8a7e5e9783767703ce7c06c73340d67f8cd083840a93ee] <==
	I1008 23:01:05.776970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 23:01:05.791827       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 23:01:05.791887       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1008 23:01:05.794235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:09.250133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:13.511542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:17.111128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:20.165060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:23.186914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:23.192460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 23:01:23.192610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 23:01:23.192790       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-825429_aa9a6673-8932-43ab-8ada-b617def1371c!
	I1008 23:01:23.193878       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"deb8d6fa-4d23-4078-b8a3-474c7c204563", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-825429_aa9a6673-8932-43ab-8ada-b617def1371c became leader
	W1008 23:01:23.201930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:23.211355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1008 23:01:23.293811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-825429_aa9a6673-8932-43ab-8ada-b617def1371c!
	W1008 23:01:25.214574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:25.220322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:27.223836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:27.235178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:29.246214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:29.260947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:31.264083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1008 23:01:31.272885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c0fdc682f025c7d581ec1e76c0b8316090b7b1ba1c04a73b7d57e39600677e81] <==
	I1008 23:00:35.091281       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1008 23:01:05.093083       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825429 -n embed-certs-825429
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-825429 -n embed-certs-825429: exit status 2 (454.015652ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-825429 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.12s)
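Note on the logs above: the first storage-provisioner instance (c0fdc682…) exits fatally because its startup request to the apiserver service VIP (GET https://10.96.0.1:443/version) times out; the replacement instance (12860fa6…) succeeds, acquires the k8s.io-minikube-hostpath lease, and starts the provisioner controller. For reference, a minimal client-go sketch of that kind of startup probe (variable names are illustrative only, not the provisioner's actual code):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config points at the kubernetes service VIP,
	// https://10.96.0.1:443 in this cluster.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Println("in-cluster config:", err)
		return
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("clientset:", err)
		return
	}
	// GET /version: the request the provisioner log reports as failing with
	// "dial tcp 10.96.0.1:443: i/o timeout".
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		fmt.Println("error getting server version:", err)
		return
	}
	fmt.Println("server version:", v.GitVersion)
}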

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.53s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-598445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-598445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (283.016647ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:02:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-598445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
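The exit status 11 here is the same MK_ADDON_ENABLE_PAUSED failure seen in the other EnableAddon/Pause tests in this run: before enabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and that command exits 1 because the runc state directory /run/runc is not present. A rough Go sketch of what such a check amounts to (function and type names are illustrative, not minikube's implementation):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors the fields of `runc list -f json` output used here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // "running", "paused", ...
}

// listPaused is a hypothetical stand-in for the "list paused" step named in
// the error above: run runc on the node and collect paused container IDs.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this node the command fails with
		// "open /run/runc: no such file or directory", so the addon
		// enable is aborted with MK_ADDON_ENABLE_PAUSED.
		return nil, fmt.Errorf("sudo runc list -f json: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}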
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-598445
helpers_test.go:243: (dbg) docker inspect newest-cni-598445:

-- stdout --
	[
	    {
	        "Id": "d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0",
	        "Created": "2025-10-08T23:01:43.562370907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208167,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T23:01:43.696650993Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/hosts",
	        "LogPath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0-json.log",
	        "Name": "/newest-cni-598445",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-598445:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-598445",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0",
	                "LowerDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-598445",
	                "Source": "/var/lib/docker/volumes/newest-cni-598445/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-598445",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-598445",
	                "name.minikube.sigs.k8s.io": "newest-cni-598445",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ba28bb12fc714a94c8bf885a8d7c4341c84a603244dddde86e87dcdda8f4e11",
	            "SandboxKey": "/var/run/docker/netns/3ba28bb12fc7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-598445": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:d2:84:b4:a8:83",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a0b1ff28b0c97915ff48c8d0f7665a15b64c8eae67960eb9db0d077a1b90fb71",
	                    "EndpointID": "3c2eaa6b42cf8b54ed8c45cbcf38a9123155fc3ac2812ba07f6968c3a95d345b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-598445",
	                        "d0d27dc20f53"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-598445 -n newest-cni-598445
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-598445 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-598445 logs -n 25: (1.161290003s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p force-systemd-flag-385382                                                                                                                                                                                                                  │ force-systemd-flag-385382    │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                                                                                          │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p no-preload-939665                                                                                                                                                                                                                          │ no-preload-939665            │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ delete  │ -p disable-driver-mounts-036919                                                                                                                                                                                                               │ disable-driver-mounts-036919 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:58 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:59 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │                     │
	│ stop    │ -p embed-certs-825429 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ stop    │ -p default-k8s-diff-port-779490 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-825429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-779490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-779490 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-779490 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ image   │ embed-certs-825429 image list --format=json                                                                                                                                                                                                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p embed-certs-825429 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-779490                                                                                                                                                                                                               │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ delete  │ -p embed-certs-825429                                                                                                                                                                                                                         │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ delete  │ -p default-k8s-diff-port-779490                                                                                                                                                                                                               │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ start   │ -p newest-cni-598445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:02 UTC │
	│ delete  │ -p embed-certs-825429                                                                                                                                                                                                                         │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ start   │ -p auto-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-840929                  │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-598445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 23:01:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 23:01:36.759173  207624 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:01:36.759371  207624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:01:36.759378  207624 out.go:374] Setting ErrFile to fd 2...
	I1008 23:01:36.759383  207624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:01:36.759673  207624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:01:36.760108  207624 out.go:368] Setting JSON to false
	I1008 23:01:36.768279  207624 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6247,"bootTime":1759958250,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 23:01:36.768360  207624 start.go:141] virtualization:  
	I1008 23:01:36.774375  207624 out.go:179] * [auto-840929] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 23:01:36.777368  207624 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 23:01:36.777424  207624 notify.go:220] Checking for updates...
	I1008 23:01:36.783518  207624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 23:01:36.786827  207624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:01:36.790015  207624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 23:01:36.793782  207624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 23:01:36.796872  207624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 23:01:36.803039  207624 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:01:36.803175  207624 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 23:01:36.848422  207624 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 23:01:36.848569  207624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:01:36.941445  207624 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-10-08 23:01:36.93233206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:01:36.941557  207624 docker.go:318] overlay module found
	I1008 23:01:36.944854  207624 out.go:179] * Using the docker driver based on user configuration
	I1008 23:01:36.947746  207624 start.go:305] selected driver: docker
	I1008 23:01:36.947768  207624 start.go:925] validating driver "docker" against <nil>
	I1008 23:01:36.947795  207624 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 23:01:36.948502  207624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:01:37.040211  207624 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-10-08 23:01:37.030685274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:01:37.040376  207624 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 23:01:37.040737  207624 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 23:01:37.043787  207624 out.go:179] * Using Docker driver with root privileges
	I1008 23:01:37.046642  207624 cni.go:84] Creating CNI manager for ""
	I1008 23:01:37.046726  207624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:01:37.046744  207624 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 23:01:37.046839  207624 start.go:349] cluster config:
	{Name:auto-840929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-840929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1008 23:01:37.050026  207624 out.go:179] * Starting "auto-840929" primary control-plane node in "auto-840929" cluster
	I1008 23:01:37.052865  207624 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 23:01:37.055837  207624 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 23:01:37.058754  207624 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:01:37.058811  207624 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 23:01:37.058822  207624 cache.go:58] Caching tarball of preloaded images
	I1008 23:01:37.058838  207624 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 23:01:37.058906  207624 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 23:01:37.058916  207624 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 23:01:37.059034  207624 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/config.json ...
	I1008 23:01:37.059069  207624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/config.json: {Name:mkc31782fed046c08e48b1d7ba5db6a531795a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:37.082237  207624 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 23:01:37.082258  207624 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 23:01:37.082272  207624 cache.go:232] Successfully downloaded all kic artifacts
	I1008 23:01:37.082295  207624 start.go:360] acquireMachinesLock for auto-840929: {Name:mk6dc1497bbf9a22904913c1e1a851dad7a88722 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 23:01:37.082389  207624 start.go:364] duration metric: took 78.36µs to acquireMachinesLock for "auto-840929"
	I1008 23:01:37.082413  207624 start.go:93] Provisioning new machine with config: &{Name:auto-840929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-840929 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:01:37.082488  207624 start.go:125] createHost starting for "" (driver="docker")
	I1008 23:01:35.163134  207155 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 23:01:35.163382  207155 start.go:159] libmachine.API.Create for "newest-cni-598445" (driver="docker")
	I1008 23:01:35.163419  207155 client.go:168] LocalClient.Create starting
	I1008 23:01:35.163501  207155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 23:01:35.163540  207155 main.go:141] libmachine: Decoding PEM data...
	I1008 23:01:35.163557  207155 main.go:141] libmachine: Parsing certificate...
	I1008 23:01:35.163609  207155 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 23:01:35.163634  207155 main.go:141] libmachine: Decoding PEM data...
	I1008 23:01:35.163645  207155 main.go:141] libmachine: Parsing certificate...
	I1008 23:01:35.164038  207155 cli_runner.go:164] Run: docker network inspect newest-cni-598445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 23:01:35.185987  207155 cli_runner.go:211] docker network inspect newest-cni-598445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 23:01:35.186073  207155 network_create.go:284] running [docker network inspect newest-cni-598445] to gather additional debugging logs...
	I1008 23:01:35.186091  207155 cli_runner.go:164] Run: docker network inspect newest-cni-598445
	W1008 23:01:35.206728  207155 cli_runner.go:211] docker network inspect newest-cni-598445 returned with exit code 1
	I1008 23:01:35.206756  207155 network_create.go:287] error running [docker network inspect newest-cni-598445]: docker network inspect newest-cni-598445: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-598445 not found
	I1008 23:01:35.206770  207155 network_create.go:289] output of [docker network inspect newest-cni-598445]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-598445 not found
	
	** /stderr **
	I1008 23:01:35.206859  207155 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:01:35.228226  207155 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 23:01:35.228591  207155 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 23:01:35.228875  207155 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 23:01:35.229434  207155 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c72f626705cd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:c4:86:26:e3:9b} reservation:<nil>}
	I1008 23:01:35.229995  207155 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f4cc0}
	I1008 23:01:35.230055  207155 network_create.go:124] attempt to create docker network newest-cni-598445 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1008 23:01:35.230181  207155 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-598445 newest-cni-598445
	I1008 23:01:35.343203  207155 network_create.go:108] docker network newest-cni-598445 192.168.85.0/24 created
	I1008 23:01:35.343238  207155 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-598445" container
	I1008 23:01:35.343323  207155 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 23:01:35.362633  207155 cli_runner.go:164] Run: docker volume create newest-cni-598445 --label name.minikube.sigs.k8s.io=newest-cni-598445 --label created_by.minikube.sigs.k8s.io=true
	I1008 23:01:35.383019  207155 oci.go:103] Successfully created a docker volume newest-cni-598445
	I1008 23:01:35.383128  207155 cli_runner.go:164] Run: docker run --rm --name newest-cni-598445-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-598445 --entrypoint /usr/bin/test -v newest-cni-598445:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 23:01:36.578870  207155 cli_runner.go:217] Completed: docker run --rm --name newest-cni-598445-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-598445 --entrypoint /usr/bin/test -v newest-cni-598445:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (1.19570724s)
	I1008 23:01:36.578899  207155 oci.go:107] Successfully prepared a docker volume newest-cni-598445
	I1008 23:01:36.578918  207155 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:01:36.578936  207155 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 23:01:36.579004  207155 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-598445:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
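
The lines above show the kic driver's create flow for newest-cni-598445: pick the first free 192.168.x.0/24 subnet, create a dedicated bridge network on it, create a named volume that will back /var in the node container, and run a throwaway kicbase container whose only job is to untar the preloaded image cache into that volume. A minimal Go sketch of the equivalent docker CLI sequence (profile name and subnet copied from the log; the tarball path is a placeholder, and this is an illustration, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

// docker runs one docker CLI invocation and echoes its combined output.
func docker(args ...string) {
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("$ docker %s\n%s(err=%v)\n", args, out, err)
}

func main() {
	const name = "newest-cni-598445" // profile name taken from the log above
	// 1. dedicated bridge network on the subnet minikube selected (192.168.85.0/24)
	docker("network", "create", "--driver=bridge",
		"--subnet=192.168.85.0/24", "--gateway=192.168.85.1",
		"-o", "com.docker.network.driver.mtu=1500", name)
	// 2. named volume that will back /var inside the node container
	docker("volume", "create", name)
	// 3. one-shot kicbase container that extracts the preloaded images into the volume
	//    (/path/to/preloaded-images.tar.lz4 is a placeholder for the cached tarball)
	docker("run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
		"-v", name+":/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
}

The same pattern repeats immediately below for the auto-840929 profile, only with the next free subnet (192.168.76.0/24).
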
	I1008 23:01:37.085869  207624 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1008 23:01:37.086104  207624 start.go:159] libmachine.API.Create for "auto-840929" (driver="docker")
	I1008 23:01:37.086146  207624 client.go:168] LocalClient.Create starting
	I1008 23:01:37.086224  207624 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem
	I1008 23:01:37.086268  207624 main.go:141] libmachine: Decoding PEM data...
	I1008 23:01:37.086286  207624 main.go:141] libmachine: Parsing certificate...
	I1008 23:01:37.086340  207624 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem
	I1008 23:01:37.086358  207624 main.go:141] libmachine: Decoding PEM data...
	I1008 23:01:37.086368  207624 main.go:141] libmachine: Parsing certificate...
	I1008 23:01:37.086714  207624 cli_runner.go:164] Run: docker network inspect auto-840929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1008 23:01:37.101403  207624 cli_runner.go:211] docker network inspect auto-840929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1008 23:01:37.101480  207624 network_create.go:284] running [docker network inspect auto-840929] to gather additional debugging logs...
	I1008 23:01:37.101498  207624 cli_runner.go:164] Run: docker network inspect auto-840929
	W1008 23:01:37.115665  207624 cli_runner.go:211] docker network inspect auto-840929 returned with exit code 1
	I1008 23:01:37.115692  207624 network_create.go:287] error running [docker network inspect auto-840929]: docker network inspect auto-840929: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-840929 not found
	I1008 23:01:37.115705  207624 network_create.go:289] output of [docker network inspect auto-840929]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-840929 not found
	
	** /stderr **
	I1008 23:01:37.115790  207624 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:01:37.133027  207624 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
	I1008 23:01:37.133375  207624 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-63e5a240d1c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:c1:7e:c4:0f:80} reservation:<nil>}
	I1008 23:01:37.133760  207624 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b4468d57db2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:58:be:ff:ae:01} reservation:<nil>}
	I1008 23:01:37.134178  207624 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001971260}
	I1008 23:01:37.134197  207624 network_create.go:124] attempt to create docker network auto-840929 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1008 23:01:37.134273  207624 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-840929 auto-840929
	I1008 23:01:37.196783  207624 network_create.go:108] docker network auto-840929 192.168.76.0/24 created
	I1008 23:01:37.196813  207624 kic.go:121] calculated static IP "192.168.76.2" for the "auto-840929" container
	I1008 23:01:37.196882  207624 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1008 23:01:37.212310  207624 cli_runner.go:164] Run: docker volume create auto-840929 --label name.minikube.sigs.k8s.io=auto-840929 --label created_by.minikube.sigs.k8s.io=true
	I1008 23:01:37.244372  207624 oci.go:103] Successfully created a docker volume auto-840929
	I1008 23:01:37.244476  207624 cli_runner.go:164] Run: docker run --rm --name auto-840929-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-840929 --entrypoint /usr/bin/test -v auto-840929:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1008 23:01:38.535262  207624 cli_runner.go:217] Completed: docker run --rm --name auto-840929-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-840929 --entrypoint /usr/bin/test -v auto-840929:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (1.29075188s)
	I1008 23:01:38.535295  207624 oci.go:107] Successfully prepared a docker volume auto-840929
	I1008 23:01:38.535329  207624 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:01:38.535347  207624 kic.go:194] Starting extracting preloaded images to volume ...
	I1008 23:01:38.535412  207624 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-840929:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1008 23:01:43.438638  207155 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-598445:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (6.859590233s)
	I1008 23:01:43.438674  207155 kic.go:203] duration metric: took 6.859733496s to extract preloaded images to volume ...
	W1008 23:01:43.438824  207155 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 23:01:43.438935  207155 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 23:01:43.543437  207155 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-598445 --name newest-cni-598445 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-598445 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-598445 --network newest-cni-598445 --ip 192.168.85.2 --volume newest-cni-598445:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 23:01:44.154628  207155 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Running}}
	I1008 23:01:44.191309  207155 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:01:44.253973  207155 cli_runner.go:164] Run: docker exec newest-cni-598445 stat /var/lib/dpkg/alternatives/iptables
	I1008 23:01:44.336946  207155 oci.go:144] the created container "newest-cni-598445" has a running status.
	I1008 23:01:44.336985  207155 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa...
	I1008 23:01:43.435529  207624 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-840929:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.90008075s)
	I1008 23:01:43.435560  207624 kic.go:203] duration metric: took 4.900208669s to extract preloaded images to volume ...
	W1008 23:01:43.435701  207624 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1008 23:01:43.435814  207624 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1008 23:01:43.529521  207624 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-840929 --name auto-840929 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-840929 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-840929 --network auto-840929 --ip 192.168.76.2 --volume auto-840929:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1008 23:01:43.919031  207624 cli_runner.go:164] Run: docker container inspect auto-840929 --format={{.State.Running}}
	I1008 23:01:43.938928  207624 cli_runner.go:164] Run: docker container inspect auto-840929 --format={{.State.Status}}
	I1008 23:01:43.972029  207624 cli_runner.go:164] Run: docker exec auto-840929 stat /var/lib/dpkg/alternatives/iptables
	I1008 23:01:44.051916  207624 oci.go:144] the created container "auto-840929" has a running status.
	I1008 23:01:44.051962  207624 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/auto-840929/id_rsa...
	I1008 23:01:45.312370  207624 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/auto-840929/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 23:01:45.424672  207624 cli_runner.go:164] Run: docker container inspect auto-840929 --format={{.State.Status}}
	I1008 23:01:45.489866  207624 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 23:01:45.489892  207624 kic_runner.go:114] Args: [docker exec --privileged auto-840929 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 23:01:45.669853  207624 cli_runner.go:164] Run: docker container inspect auto-840929 --format={{.State.Status}}
	I1008 23:01:45.727699  207624 machine.go:93] provisionDockerMachine start ...
	I1008 23:01:45.727796  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:01:45.782029  207624 main.go:141] libmachine: Using SSH client type: native
	I1008 23:01:45.782358  207624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1008 23:01:45.782368  207624 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:01:46.077863  207624 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-840929
	
	I1008 23:01:46.077885  207624 ubuntu.go:182] provisioning hostname "auto-840929"
	I1008 23:01:46.077946  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:01:46.135770  207624 main.go:141] libmachine: Using SSH client type: native
	I1008 23:01:46.136196  207624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1008 23:01:46.136232  207624 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-840929 && echo "auto-840929" | sudo tee /etc/hostname
	I1008 23:01:46.387323  207624 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-840929
	
	I1008 23:01:46.387479  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:01:46.414195  207624 main.go:141] libmachine: Using SSH client type: native
	I1008 23:01:46.414511  207624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1008 23:01:46.414528  207624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-840929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-840929/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-840929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:01:46.618134  207624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:01:46.618164  207624 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:01:46.618205  207624 ubuntu.go:190] setting up certificates
	I1008 23:01:46.618215  207624 provision.go:84] configureAuth start
	I1008 23:01:46.618298  207624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-840929
	I1008 23:01:46.648791  207624 provision.go:143] copyHostCerts
	I1008 23:01:46.648861  207624 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:01:46.648874  207624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:01:46.648948  207624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:01:46.649051  207624 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:01:46.649063  207624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:01:46.649097  207624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:01:46.649164  207624 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:01:46.649170  207624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:01:46.650479  207624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:01:46.650597  207624 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.auto-840929 san=[127.0.0.1 192.168.76.2 auto-840929 localhost minikube]
	I1008 23:01:47.076344  207624 provision.go:177] copyRemoteCerts
	I1008 23:01:47.076454  207624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:01:47.076550  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:01:47.100795  207624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/auto-840929/id_rsa Username:docker}
	I1008 23:01:47.206420  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:01:47.232292  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 23:01:47.253688  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 23:01:47.274377  207624 provision.go:87] duration metric: took 656.132433ms to configureAuth
	I1008 23:01:47.274402  207624 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:01:47.274582  207624 config.go:182] Loaded profile config "auto-840929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:01:47.274690  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:01:47.292659  207624 main.go:141] libmachine: Using SSH client type: native
	I1008 23:01:47.292956  207624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1008 23:01:47.292970  207624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:01:47.668102  207624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:01:47.668136  207624 machine.go:96] duration metric: took 1.940418302s to provisionDockerMachine
	I1008 23:01:47.668157  207624 client.go:171] duration metric: took 10.582000607s to LocalClient.Create
	I1008 23:01:47.668172  207624 start.go:167] duration metric: took 10.582068243s to libmachine.API.Create "auto-840929"
	I1008 23:01:47.668191  207624 start.go:293] postStartSetup for "auto-840929" (driver="docker")
	I1008 23:01:47.668202  207624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:01:47.668324  207624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:01:47.668369  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:01:47.693084  207624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/auto-840929/id_rsa Username:docker}
	I1008 23:01:47.805829  207624 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:01:47.809158  207624 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:01:47.809184  207624 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:01:47.809195  207624 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:01:47.809245  207624 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:01:47.809331  207624 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:01:47.809440  207624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:01:47.816992  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:01:47.835901  207624 start.go:296] duration metric: took 167.692963ms for postStartSetup
	I1008 23:01:47.836317  207624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-840929
	I1008 23:01:47.862535  207624 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/config.json ...
	I1008 23:01:47.863664  207624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:01:47.863720  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:01:47.889207  207624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/auto-840929/id_rsa Username:docker}
	I1008 23:01:47.999450  207624 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:01:48.014886  207624 start.go:128] duration metric: took 10.932371616s to createHost
	I1008 23:01:48.014910  207624 start.go:83] releasing machines lock for "auto-840929", held for 10.932513313s
	I1008 23:01:48.014988  207624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-840929
	I1008 23:01:48.036542  207624 ssh_runner.go:195] Run: cat /version.json
	I1008 23:01:48.036595  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:01:48.036899  207624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:01:48.036963  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:01:48.074033  207624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/auto-840929/id_rsa Username:docker}
	I1008 23:01:48.083204  207624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/auto-840929/id_rsa Username:docker}
	I1008 23:01:48.305419  207624 ssh_runner.go:195] Run: systemctl --version
	I1008 23:01:48.312406  207624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:01:48.355987  207624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:01:48.361056  207624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:01:48.361139  207624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:01:48.407360  207624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 23:01:48.407384  207624 start.go:495] detecting cgroup driver to use...
	I1008 23:01:48.407433  207624 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:01:48.407488  207624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:01:48.428078  207624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:01:48.444147  207624 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:01:48.444260  207624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:01:48.464395  207624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:01:48.487381  207624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:01:48.645573  207624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:01:48.810891  207624 docker.go:234] disabling docker service ...
	I1008 23:01:48.810994  207624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:01:48.842132  207624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:01:48.858310  207624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:01:49.015110  207624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:01:49.188079  207624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:01:49.216219  207624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:01:49.234470  207624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:01:49.234561  207624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.243942  207624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:01:49.244043  207624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.253992  207624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.264681  207624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.275161  207624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:01:49.283998  207624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.295683  207624 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.312322  207624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.321817  207624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:01:49.331899  207624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:01:49.340361  207624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:01:49.486805  207624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:01:49.635953  207624 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:01:49.636072  207624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:01:49.640417  207624 start.go:563] Will wait 60s for crictl version
	I1008 23:01:49.640546  207624 ssh_runner.go:195] Run: which crictl
	I1008 23:01:49.644287  207624 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:01:49.684273  207624 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:01:49.684386  207624 ssh_runner.go:195] Run: crio --version
	I1008 23:01:49.718286  207624 ssh_runner.go:195] Run: crio --version
	I1008 23:01:49.756702  207624 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
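
Before reaching the "Preparing Kubernetes" step, both provisioning flows rewrite the node's cri-o drop-in over SSH with a series of sed commands: pin the pause image, switch cgroup_manager to cgroupfs, force conmon_cgroup to "pod", and add the net.ipv4.ip_unprivileged_port_start sysctl. As a rough illustration only, the fragment below is roughly what those edits converge on; the [crio.image]/[crio.runtime] section placement is an assumption, since the log shows only the individual key rewrites, not the resulting /etc/crio/crio.conf.d/02-crio.conf:

package main

import "os"

// Approximate net effect of the sed edits in the log above on the cri-o drop-in.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// Written to a local example file here; minikube edits the file on the node over SSH.
	_ = os.WriteFile("02-crio.conf.example", []byte(crioDropIn), 0o644)
}

After the rewrite, crio is restarted via systemctl and the log waits on /var/run/crio/crio.sock before probing crictl, as shown above.
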
	I1008 23:01:45.972401  207155 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1008 23:01:46.017765  207155 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:01:46.046555  207155 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1008 23:01:46.046574  207155 kic_runner.go:114] Args: [docker exec --privileged newest-cni-598445 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1008 23:01:46.119255  207155 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:01:46.153590  207155 machine.go:93] provisionDockerMachine start ...
	I1008 23:01:46.153760  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:01:46.188531  207155 main.go:141] libmachine: Using SSH client type: native
	I1008 23:01:46.188874  207155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1008 23:01:46.188883  207155 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:01:46.424717  207155 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-598445
	
	I1008 23:01:46.424743  207155 ubuntu.go:182] provisioning hostname "newest-cni-598445"
	I1008 23:01:46.424809  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:01:46.454837  207155 main.go:141] libmachine: Using SSH client type: native
	I1008 23:01:46.455147  207155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1008 23:01:46.455164  207155 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-598445 && echo "newest-cni-598445" | sudo tee /etc/hostname
	I1008 23:01:46.659311  207155 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-598445
	
	I1008 23:01:46.659412  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:01:46.697964  207155 main.go:141] libmachine: Using SSH client type: native
	I1008 23:01:46.698296  207155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1008 23:01:46.698318  207155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-598445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-598445/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-598445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:01:46.881996  207155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:01:46.882021  207155 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:01:46.882101  207155 ubuntu.go:190] setting up certificates
	I1008 23:01:46.882182  207155 provision.go:84] configureAuth start
	I1008 23:01:46.882267  207155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-598445
	I1008 23:01:46.904079  207155 provision.go:143] copyHostCerts
	I1008 23:01:46.904149  207155 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:01:46.904159  207155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:01:46.904225  207155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:01:46.904332  207155 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:01:46.904342  207155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:01:46.904365  207155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:01:46.904430  207155 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:01:46.904439  207155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:01:46.904460  207155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:01:46.904524  207155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.newest-cni-598445 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-598445]
	I1008 23:01:47.205107  207155 provision.go:177] copyRemoteCerts
	I1008 23:01:47.205183  207155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:01:47.205230  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:01:47.225828  207155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:01:47.330228  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 23:01:47.355917  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 23:01:47.377376  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:01:47.398027  207155 provision.go:87] duration metric: took 515.829554ms to configureAuth
	I1008 23:01:47.398053  207155 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:01:47.398240  207155 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:01:47.398347  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:01:47.418259  207155 main.go:141] libmachine: Using SSH client type: native
	I1008 23:01:47.418583  207155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1008 23:01:47.418616  207155 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:01:47.717780  207155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:01:47.717805  207155 machine.go:96] duration metric: took 1.564197856s to provisionDockerMachine
	I1008 23:01:47.717821  207155 client.go:171] duration metric: took 12.554390832s to LocalClient.Create
	I1008 23:01:47.717832  207155 start.go:167] duration metric: took 12.554449959s to libmachine.API.Create "newest-cni-598445"
	I1008 23:01:47.717839  207155 start.go:293] postStartSetup for "newest-cni-598445" (driver="docker")
	I1008 23:01:47.717848  207155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:01:47.717913  207155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:01:47.717958  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:01:47.742200  207155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:01:47.847954  207155 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:01:47.852039  207155 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:01:47.852067  207155 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:01:47.852079  207155 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:01:47.852142  207155 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:01:47.852225  207155 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:01:47.852324  207155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:01:47.862999  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:01:47.888323  207155 start.go:296] duration metric: took 170.469272ms for postStartSetup
	I1008 23:01:47.888682  207155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-598445
	I1008 23:01:47.909947  207155 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/config.json ...
	I1008 23:01:47.910215  207155 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:01:47.910277  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:01:47.930371  207155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:01:48.034825  207155 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:01:48.041267  207155 start.go:128] duration metric: took 12.879874388s to createHost
	I1008 23:01:48.041292  207155 start.go:83] releasing machines lock for "newest-cni-598445", held for 12.880010463s
	I1008 23:01:48.041368  207155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-598445
	I1008 23:01:48.090827  207155 ssh_runner.go:195] Run: cat /version.json
	I1008 23:01:48.090901  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:01:48.091137  207155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:01:48.091197  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:01:48.143793  207155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:01:48.145197  207155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:01:48.246067  207155 ssh_runner.go:195] Run: systemctl --version
	I1008 23:01:48.346643  207155 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:01:48.401193  207155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:01:48.408360  207155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:01:48.408429  207155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:01:48.453190  207155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1008 23:01:48.453275  207155 start.go:495] detecting cgroup driver to use...
	I1008 23:01:48.453344  207155 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:01:48.453428  207155 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:01:48.475373  207155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:01:48.491873  207155 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:01:48.491989  207155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:01:48.510932  207155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:01:48.535232  207155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:01:48.722366  207155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:01:48.910780  207155 docker.go:234] disabling docker service ...
	I1008 23:01:48.910878  207155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:01:48.949257  207155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:01:48.964939  207155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:01:49.128815  207155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:01:49.292905  207155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:01:49.312017  207155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:01:49.329821  207155 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:01:49.329913  207155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.341337  207155 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:01:49.341436  207155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.351667  207155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.361263  207155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.370984  207155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:01:49.387456  207155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.399884  207155 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.426653  207155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:01:49.437191  207155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:01:49.444994  207155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:01:49.453241  207155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:01:49.599903  207155 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:01:49.759392  207155 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:01:49.759462  207155 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:01:49.767468  207155 start.go:563] Will wait 60s for crictl version
	I1008 23:01:49.767531  207155 ssh_runner.go:195] Run: which crictl
	I1008 23:01:49.774487  207155 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:01:49.812266  207155 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:01:49.812351  207155 ssh_runner.go:195] Run: crio --version
	I1008 23:01:49.843405  207155 ssh_runner.go:195] Run: crio --version
	I1008 23:01:49.882898  207155 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 23:01:49.885956  207155 cli_runner.go:164] Run: docker network inspect newest-cni-598445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:01:49.909519  207155 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 23:01:49.913676  207155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:01:49.926302  207155 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
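
The grep/cp one-liner a few lines above rewrites the node's /etc/hosts so host.minikube.internal resolves to the docker network gateway (192.168.85.1 for newest-cni-598445, 192.168.76.1 for auto-840929): any existing mapping is dropped, then the gateway entry is appended. A rough Go equivalent of that upsert, with the helper name and local paths chosen here purely for illustration:

package main

import (
	"os"
	"strings"
)

// upsertHostEntry removes any existing line mapping name and appends ip<TAB>name,
// mirroring the shell one-liner in the log above.
func upsertHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// 192.168.85.1 is the gateway the log shows for the newest-cni-598445 network.
	_ = upsertHostEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal")
}
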
	I1008 23:01:49.759795  207624 cli_runner.go:164] Run: docker network inspect auto-840929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:01:49.780614  207624 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1008 23:01:49.785071  207624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:01:49.795771  207624 kubeadm.go:883] updating cluster {Name:auto-840929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-840929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:01:49.795888  207624 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:01:49.795944  207624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:01:49.831870  207624 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:01:49.831891  207624 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:01:49.831944  207624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:01:49.863356  207624 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:01:49.863376  207624 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:01:49.863384  207624 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1008 23:01:49.863472  207624 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-840929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-840929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:01:49.863548  207624 ssh_runner.go:195] Run: crio config
	I1008 23:01:49.932887  207624 cni.go:84] Creating CNI manager for ""
	I1008 23:01:49.932910  207624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:01:49.932927  207624 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1008 23:01:49.932981  207624 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-840929 NodeName:auto-840929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:01:49.933148  207624 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-840929"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:01:49.933236  207624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:01:49.943217  207624 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:01:49.943288  207624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:01:49.952285  207624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1008 23:01:49.969343  207624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:01:49.983957  207624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
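The 2208-byte kubeadm.yaml.new staged above carries the whole multi-document config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---). A minimal sketch of staging such a file and sanity-checking it before init; `kubeadm config validate` is assumed to be available in this kubeadm release and is not something the log runs, and the target directory is taken as already existing:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // kubeadmYAML stands in for the multi-document config printed in the log;
        // it is read from an environment variable purely for illustration.
        kubeadmYAML := os.Getenv("KUBEADM_CONFIG")

        // Stage the file the way the log does: write .new first, promote it later.
        path := "/var/tmp/minikube/kubeadm.yaml.new"
        if err := os.WriteFile(path, []byte(kubeadmYAML), 0o644); err != nil {
            panic(err)
        }

        // Optional sanity check before `kubeadm init` (assumed subcommand, see note above).
        out, err := exec.Command("kubeadm", "config", "validate", "--config", path).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }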
	I1008 23:01:49.998855  207624 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:01:50.003857  207624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:01:50.018618  207624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:01:50.190551  207624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:01:50.220266  207624 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929 for IP: 192.168.76.2
	I1008 23:01:50.220298  207624 certs.go:195] generating shared ca certs ...
	I1008 23:01:50.220340  207624 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:50.220522  207624 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:01:50.220595  207624 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:01:50.220610  207624 certs.go:257] generating profile certs ...
	I1008 23:01:50.220691  207624 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.key
	I1008 23:01:50.220709  207624 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt with IP's: []
	I1008 23:01:50.624616  207624 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt ...
	I1008 23:01:50.624702  207624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: {Name:mk2e25a075359104debe42c53c851b9e827fa5a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:50.624934  207624 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.key ...
	I1008 23:01:50.624978  207624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.key: {Name:mk9a0cf6a6d755251847a1c4d5aece10fb6c12f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:50.625094  207624 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.key.3f6de284
	I1008 23:01:50.625143  207624 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.crt.3f6de284 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1008 23:01:50.815457  207624 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.crt.3f6de284 ...
	I1008 23:01:50.815535  207624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.crt.3f6de284: {Name:mkaf180bc7c81e8398821feadcb20856c8c75afd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:50.815777  207624 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.key.3f6de284 ...
	I1008 23:01:50.815818  207624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.key.3f6de284: {Name:mk1c42d1113485b56c32fb9ab62faf0683a22783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:50.815954  207624 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.crt.3f6de284 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.crt
	I1008 23:01:50.816092  207624 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.key.3f6de284 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.key
	I1008 23:01:50.816192  207624 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/proxy-client.key
	I1008 23:01:50.816236  207624 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/proxy-client.crt with IP's: []
	I1008 23:01:51.543991  207624 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/proxy-client.crt ...
	I1008 23:01:51.544117  207624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/proxy-client.crt: {Name:mkdeb9cefcaefd29902d612a71fb801fe585147a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:51.544360  207624 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/proxy-client.key ...
	I1008 23:01:51.544374  207624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/proxy-client.key: {Name:mk6fdef611ec1a4f4ff0396d6e4c1608a5294cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:51.544557  207624 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:01:51.544595  207624 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:01:51.544604  207624 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:01:51.544632  207624 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:01:51.544657  207624 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:01:51.544687  207624 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:01:51.544730  207624 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:01:51.545374  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:01:51.570199  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:01:51.597920  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:01:51.623289  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:01:51.642976  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1008 23:01:51.664117  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 23:01:51.684738  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:01:51.705124  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:01:51.726018  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:01:51.746089  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:01:51.766671  207624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:01:51.786893  207624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:01:51.801343  207624 ssh_runner.go:195] Run: openssl version
	I1008 23:01:51.810142  207624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:01:51.820468  207624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:01:51.824811  207624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:01:51.824897  207624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:01:51.874162  207624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:01:51.892451  207624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:01:51.914288  207624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:01:51.921166  207624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:01:51.921259  207624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:01:51.983804  207624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:01:51.993231  207624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:01:52.003356  207624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:01:52.008851  207624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:01:52.008965  207624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:01:52.057059  207624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
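The block above installs each CA bundle under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two test certs). A small sketch of deriving that link name and creating the symlink the same way, assuming openssl is available where the program runs; the cert path is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log

        // Same hash the log computes: openssl x509 -hash -noout -in <cert>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // Link the bundle under <hash>.0, as the `ln -fs` commands in the log do.
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // mirror -f: drop an existing link first
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println(link, "->", cert)
    }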
	I1008 23:01:52.066949  207624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:01:52.071723  207624 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 23:01:52.071797  207624 kubeadm.go:400] StartCluster: {Name:auto-840929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-840929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:01:52.071900  207624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:01:52.071965  207624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:01:52.119699  207624 cri.go:89] found id: ""
	I1008 23:01:52.119815  207624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:01:52.135144  207624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 23:01:52.154426  207624 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 23:01:52.154524  207624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 23:01:52.172490  207624 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 23:01:52.172514  207624 kubeadm.go:157] found existing configuration files:
	
	I1008 23:01:52.172569  207624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 23:01:52.183573  207624 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 23:01:52.183650  207624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 23:01:52.195612  207624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 23:01:52.204027  207624 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 23:01:52.204093  207624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 23:01:52.212530  207624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 23:01:52.221251  207624 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 23:01:52.221365  207624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 23:01:52.229916  207624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 23:01:52.239298  207624 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 23:01:52.239350  207624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
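The cleanup pass above greps each existing kubeconfig for the expected control-plane endpoint and removes any file that does not contain it (here the files are simply missing, so every grep fails and each rm is a no-op). A compact sketch of that check-then-remove loop, with the endpoint and file list copied from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent (or the file is missing);
            // in that case the stale file is removed so kubeadm can regenerate it.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                fmt.Println("removing stale", f)
                _ = os.Remove(f)
            }
        }
    }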
	I1008 23:01:52.247487  207624 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
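The Start line above is the actual bootstrap: kubeadm init is run against the staged config with a fixed set of preflight checks ignored, since the docker driver cannot satisfy SystemVerification, Swap, and the other host-level checks. A minimal sketch of invoking it the same way, assuming the same binary path and root privileges; the flag list is copied verbatim from the command above:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Preflight checks skipped on the docker driver, copied from the command above.
        ignored := "DirAvailable--etc-kubernetes-manifests," +
            "DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd," +
            "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml," +
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml," +
            "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml," +
            "FileAvailable--etc-kubernetes-manifests-etcd.yaml," +
            "Port-10250,Swap,NumCPU,Mem,SystemVerification," +
            "FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"

        cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubeadm",
            "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors="+ignored)
        cmd.Stdout = os.Stdout // the [init]/[preflight]/[certs] phases below come from this stream
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }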
	I1008 23:01:52.310066  207624 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 23:01:52.310480  207624 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 23:01:52.346196  207624 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 23:01:52.346279  207624 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 23:01:52.346317  207624 kubeadm.go:318] OS: Linux
	I1008 23:01:52.346365  207624 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 23:01:52.346415  207624 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 23:01:52.346465  207624 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 23:01:52.346515  207624 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 23:01:52.346565  207624 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 23:01:52.346630  207624 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 23:01:52.346681  207624 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 23:01:52.346735  207624 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 23:01:52.346783  207624 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 23:01:52.452192  207624 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 23:01:52.452379  207624 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 23:01:52.452514  207624 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 23:01:52.466008  207624 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 23:01:49.929282  207155 kubeadm.go:883] updating cluster {Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:01:49.929430  207155 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:01:49.929512  207155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:01:49.978705  207155 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:01:49.978729  207155 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:01:49.978783  207155 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:01:50.013818  207155 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:01:50.013845  207155 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:01:50.013854  207155 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1008 23:01:50.013961  207155 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-598445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:01:50.014064  207155 ssh_runner.go:195] Run: crio config
	I1008 23:01:50.123550  207155 cni.go:84] Creating CNI manager for ""
	I1008 23:01:50.123574  207155 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:01:50.123619  207155 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1008 23:01:50.123650  207155 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-598445 NodeName:newest-cni-598445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:01:50.123805  207155 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-598445"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:01:50.123898  207155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:01:50.132404  207155 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:01:50.132501  207155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:01:50.141040  207155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 23:01:50.155875  207155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:01:50.169292  207155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1008 23:01:50.183098  207155 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:01:50.187068  207155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:01:50.196686  207155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:01:50.402108  207155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:01:50.417724  207155 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445 for IP: 192.168.85.2
	I1008 23:01:50.417743  207155 certs.go:195] generating shared ca certs ...
	I1008 23:01:50.417764  207155 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:50.417894  207155 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:01:50.417945  207155 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:01:50.417957  207155 certs.go:257] generating profile certs ...
	I1008 23:01:50.418012  207155 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/client.key
	I1008 23:01:50.418034  207155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/client.crt with IP's: []
	I1008 23:01:51.782489  207155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/client.crt ...
	I1008 23:01:51.782556  207155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/client.crt: {Name:mkf2cb88c8098be1415e838a58d271a943ea2485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:51.782779  207155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/client.key ...
	I1008 23:01:51.782815  207155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/client.key: {Name:mk193abc7585031002596e0aa2cf7c8c61e9af41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:51.782948  207155 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key.1a399b11
	I1008 23:01:51.782991  207155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.crt.1a399b11 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1008 23:01:52.474059  207155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.crt.1a399b11 ...
	I1008 23:01:52.474101  207155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.crt.1a399b11: {Name:mk29b4c624e27a9da4ad71e944b81655afbaf78b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:52.474326  207155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key.1a399b11 ...
	I1008 23:01:52.474345  207155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key.1a399b11: {Name:mke6f31b4f29843fec8f5d0096a904f962a6b301 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:52.474473  207155 certs.go:382] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.crt.1a399b11 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.crt
	I1008 23:01:52.474591  207155 certs.go:386] copying /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key.1a399b11 -> /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key
	I1008 23:01:52.474681  207155 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.key
	I1008 23:01:52.474743  207155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.crt with IP's: []
	I1008 23:01:53.205976  207155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.crt ...
	I1008 23:01:53.206032  207155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.crt: {Name:mka9ca8c17778d0d39c2ec00a9fd59095872dba5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:53.206277  207155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.key ...
	I1008 23:01:53.206316  207155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.key: {Name:mk61aef498d6e4aa456650624765d669d0d74d94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:01:53.206549  207155 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:01:53.206618  207155 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:01:53.206644  207155 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:01:53.206693  207155 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:01:53.206748  207155 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:01:53.206802  207155 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:01:53.206868  207155 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:01:53.207471  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:01:53.224354  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:01:53.241620  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:01:53.259543  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:01:53.278778  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 23:01:53.296525  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 23:01:53.316068  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:01:53.335954  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:01:53.365269  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:01:53.399020  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:01:53.427669  207155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:01:53.457454  207155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:01:53.478887  207155 ssh_runner.go:195] Run: openssl version
	I1008 23:01:53.486032  207155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:01:53.495206  207155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:01:53.499500  207155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:01:53.499588  207155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:01:53.541513  207155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:01:53.550842  207155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:01:53.560005  207155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:01:53.564691  207155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:01:53.564784  207155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:01:53.622157  207155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:01:53.632945  207155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:01:53.650688  207155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:01:53.655757  207155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:01:53.655864  207155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:01:53.731116  207155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:01:53.741754  207155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:01:53.746654  207155 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1008 23:01:53.746740  207155 kubeadm.go:400] StartCluster: {Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:01:53.746847  207155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:01:53.746913  207155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:01:53.788273  207155 cri.go:89] found id: ""
	I1008 23:01:53.788373  207155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:01:53.798667  207155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 23:01:53.807306  207155 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1008 23:01:53.807393  207155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 23:01:53.818470  207155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 23:01:53.818493  207155 kubeadm.go:157] found existing configuration files:
	
	I1008 23:01:53.818574  207155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1008 23:01:53.827991  207155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 23:01:53.828079  207155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 23:01:53.836266  207155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1008 23:01:53.845361  207155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 23:01:53.845464  207155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 23:01:53.853671  207155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1008 23:01:53.862971  207155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 23:01:53.863064  207155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 23:01:53.871253  207155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1008 23:01:53.880417  207155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 23:01:53.880515  207155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 23:01:53.888871  207155 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1008 23:01:53.941678  207155 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1008 23:01:53.942134  207155 kubeadm.go:318] [preflight] Running pre-flight checks
	I1008 23:01:53.986488  207155 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1008 23:01:53.986562  207155 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1008 23:01:53.986605  207155 kubeadm.go:318] OS: Linux
	I1008 23:01:53.986658  207155 kubeadm.go:318] CGROUPS_CPU: enabled
	I1008 23:01:53.986713  207155 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1008 23:01:53.986769  207155 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1008 23:01:53.986824  207155 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1008 23:01:53.986879  207155 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1008 23:01:53.986934  207155 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1008 23:01:53.986986  207155 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1008 23:01:53.987039  207155 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1008 23:01:53.987097  207155 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1008 23:01:54.073859  207155 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 23:01:54.073976  207155 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 23:01:54.074077  207155 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1008 23:01:54.090005  207155 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 23:01:54.096143  207155 out.go:252]   - Generating certificates and keys ...
	I1008 23:01:54.096248  207155 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 23:01:54.096325  207155 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 23:01:54.519155  207155 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 23:01:54.737020  207155 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 23:01:52.472254  207624 out.go:252]   - Generating certificates and keys ...
	I1008 23:01:52.472352  207624 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1008 23:01:52.472422  207624 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1008 23:01:53.016310  207624 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1008 23:01:53.701254  207624 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1008 23:01:54.055333  207624 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 23:01:54.641990  207624 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 23:01:55.146008  207624 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 23:01:55.146145  207624 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-840929 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 23:01:55.993984  207624 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 23:01:55.994119  207624 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-840929 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1008 23:01:56.212944  207624 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 23:01:56.605993  207624 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 23:01:57.213978  207624 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 23:01:57.214052  207624 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 23:01:57.494012  207624 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 23:01:57.941985  207624 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 23:01:58.584620  207624 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 23:01:59.002014  207624 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 23:01:59.201998  207624 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 23:01:59.202098  207624 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 23:01:59.202179  207624 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 23:01:55.377894  207155 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1008 23:01:55.604667  207155 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1008 23:01:56.134228  207155 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1008 23:01:56.135330  207155 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-598445] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 23:01:57.505951  207155 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1008 23:01:57.506096  207155 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-598445] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1008 23:01:58.496000  207155 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1008 23:01:58.736222  207155 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1008 23:01:59.770111  207155 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1008 23:01:59.770193  207155 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 23:01:59.206048  207624 out.go:252]   - Booting up control plane ...
	I1008 23:01:59.206155  207624 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 23:01:59.206433  207624 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 23:01:59.209919  207624 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 23:01:59.238044  207624 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 23:01:59.238158  207624 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 23:01:59.250179  207624 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 23:01:59.250306  207624 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 23:01:59.250350  207624 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 23:01:59.428724  207624 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 23:01:59.428856  207624 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 23:02:01.432956  207624 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001757843s
	I1008 23:02:01.433902  207624 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 23:02:01.434152  207624 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1008 23:02:01.434252  207624 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 23:02:01.434435  207624 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 23:02:00.779164  207155 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 23:02:01.499746  207155 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1008 23:02:02.050001  207155 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 23:02:02.734011  207155 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 23:02:03.467673  207155 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 23:02:03.468309  207155 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 23:02:03.471420  207155 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 23:02:03.474691  207155 out.go:252]   - Booting up control plane ...
	I1008 23:02:03.474873  207155 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 23:02:03.478617  207155 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 23:02:03.483427  207155 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 23:02:03.523346  207155 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 23:02:03.523465  207155 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1008 23:02:03.532117  207155 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1008 23:02:03.532229  207155 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 23:02:03.532276  207155 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1008 23:02:03.818072  207155 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1008 23:02:03.818204  207155 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1008 23:02:04.801979  207155 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001467882s
	I1008 23:02:04.812406  207155 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1008 23:02:04.812513  207155 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1008 23:02:04.812833  207155 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1008 23:02:04.812928  207155 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1008 23:02:08.029350  207624 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.594736364s
	I1008 23:02:10.205429  207624 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.77055191s
	I1008 23:02:12.436665  207624 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 11.002364576s
	I1008 23:02:12.466753  207624 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 23:02:12.486897  207624 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 23:02:12.506523  207624 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 23:02:12.506744  207624 kubeadm.go:318] [mark-control-plane] Marking the node auto-840929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 23:02:12.529849  207624 kubeadm.go:318] [bootstrap-token] Using token: sbfza2.z1833u4sl1thhqj9
	I1008 23:02:12.532796  207624 out.go:252]   - Configuring RBAC rules ...
	I1008 23:02:12.532936  207624 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 23:02:12.539753  207624 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 23:02:12.559132  207624 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 23:02:12.568244  207624 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 23:02:12.572628  207624 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 23:02:12.578228  207624 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 23:02:12.843711  207624 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 23:02:13.437346  207624 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 23:02:13.846866  207624 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 23:02:13.848816  207624 kubeadm.go:318] 
	I1008 23:02:13.848894  207624 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 23:02:13.848900  207624 kubeadm.go:318] 
	I1008 23:02:13.848981  207624 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 23:02:13.848986  207624 kubeadm.go:318] 
	I1008 23:02:13.849012  207624 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 23:02:13.849543  207624 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 23:02:13.849606  207624 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 23:02:13.849611  207624 kubeadm.go:318] 
	I1008 23:02:13.849698  207624 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 23:02:13.849704  207624 kubeadm.go:318] 
	I1008 23:02:13.849754  207624 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 23:02:13.849758  207624 kubeadm.go:318] 
	I1008 23:02:13.849813  207624 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 23:02:13.849891  207624 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 23:02:13.849963  207624 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 23:02:13.849967  207624 kubeadm.go:318] 
	I1008 23:02:13.850345  207624 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 23:02:13.850438  207624 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 23:02:13.850443  207624 kubeadm.go:318] 
	I1008 23:02:13.850778  207624 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token sbfza2.z1833u4sl1thhqj9 \
	I1008 23:02:13.850894  207624 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 23:02:13.851164  207624 kubeadm.go:318] 	--control-plane 
	I1008 23:02:13.851181  207624 kubeadm.go:318] 
	I1008 23:02:13.851474  207624 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 23:02:13.851484  207624 kubeadm.go:318] 
	I1008 23:02:13.851826  207624 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token sbfza2.z1833u4sl1thhqj9 \
	I1008 23:02:13.852138  207624 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 23:02:13.858079  207624 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 23:02:13.858337  207624 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 23:02:13.858447  207624 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 23:02:13.858462  207624 cni.go:84] Creating CNI manager for ""
	I1008 23:02:13.858469  207624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:02:13.861708  207624 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 23:02:12.116810  207155 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.303991312s
	I1008 23:02:13.123273  207155 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.310868678s
	I1008 23:02:15.315613  207155 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.502956247s
	I1008 23:02:15.335852  207155 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 23:02:15.356190  207155 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 23:02:15.376212  207155 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 23:02:15.376424  207155 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-598445 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 23:02:15.391338  207155 kubeadm.go:318] [bootstrap-token] Using token: 054yki.v49oul6wom4m3iz4
	I1008 23:02:15.396189  207155 out.go:252]   - Configuring RBAC rules ...
	I1008 23:02:15.396329  207155 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 23:02:15.404428  207155 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 23:02:15.419923  207155 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 23:02:15.426265  207155 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 23:02:15.437120  207155 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 23:02:15.447840  207155 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 23:02:15.722421  207155 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 23:02:16.227840  207155 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1008 23:02:16.722178  207155 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1008 23:02:16.723408  207155 kubeadm.go:318] 
	I1008 23:02:16.723486  207155 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1008 23:02:16.723499  207155 kubeadm.go:318] 
	I1008 23:02:16.723592  207155 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1008 23:02:16.723605  207155 kubeadm.go:318] 
	I1008 23:02:16.723632  207155 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1008 23:02:16.723698  207155 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 23:02:16.723755  207155 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 23:02:16.723763  207155 kubeadm.go:318] 
	I1008 23:02:16.723819  207155 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1008 23:02:16.723829  207155 kubeadm.go:318] 
	I1008 23:02:16.723880  207155 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 23:02:16.723889  207155 kubeadm.go:318] 
	I1008 23:02:16.723944  207155 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1008 23:02:16.724027  207155 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 23:02:16.724102  207155 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 23:02:16.724112  207155 kubeadm.go:318] 
	I1008 23:02:16.724207  207155 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 23:02:16.724291  207155 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1008 23:02:16.724301  207155 kubeadm.go:318] 
	I1008 23:02:16.724389  207155 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 054yki.v49oul6wom4m3iz4 \
	I1008 23:02:16.724500  207155 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 \
	I1008 23:02:16.724526  207155 kubeadm.go:318] 	--control-plane 
	I1008 23:02:16.724534  207155 kubeadm.go:318] 
	I1008 23:02:16.724637  207155 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1008 23:02:16.724646  207155 kubeadm.go:318] 
	I1008 23:02:16.724732  207155 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 054yki.v49oul6wom4m3iz4 \
	I1008 23:02:16.724843  207155 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a17c01f80b9e30c7c4122c8aaaeb5705c741ce470a8a05d86b8146c319369185 
	I1008 23:02:16.728611  207155 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1008 23:02:16.728891  207155 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1008 23:02:16.729051  207155 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 23:02:16.729072  207155 cni.go:84] Creating CNI manager for ""
	I1008 23:02:16.729081  207155 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:02:16.732250  207155 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1008 23:02:13.864668  207624 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 23:02:13.872288  207624 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 23:02:13.872310  207624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 23:02:13.891167  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 23:02:14.423507  207624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 23:02:14.423649  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:14.423747  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-840929 minikube.k8s.io/updated_at=2025_10_08T23_02_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=auto-840929 minikube.k8s.io/primary=true
	I1008 23:02:14.836908  207624 ops.go:34] apiserver oom_adj: -16
	I1008 23:02:14.837017  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:15.337586  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:15.837060  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:16.337981  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:16.837747  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:17.337825  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:17.837238  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:18.337561  207624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:18.476569  207624 kubeadm.go:1113] duration metric: took 4.052967839s to wait for elevateKubeSystemPrivileges
	I1008 23:02:18.476639  207624 kubeadm.go:402] duration metric: took 26.404852523s to StartCluster
	I1008 23:02:18.476672  207624 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:18.476774  207624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:02:18.477454  207624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:18.477773  207624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 23:02:18.477781  207624 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:02:18.478029  207624 config.go:182] Loaded profile config "auto-840929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:02:18.478069  207624 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:02:18.478137  207624 addons.go:69] Setting storage-provisioner=true in profile "auto-840929"
	I1008 23:02:18.478153  207624 addons.go:238] Setting addon storage-provisioner=true in "auto-840929"
	I1008 23:02:18.478188  207624 host.go:66] Checking if "auto-840929" exists ...
	I1008 23:02:18.478652  207624 cli_runner.go:164] Run: docker container inspect auto-840929 --format={{.State.Status}}
	I1008 23:02:18.479108  207624 addons.go:69] Setting default-storageclass=true in profile "auto-840929"
	I1008 23:02:18.479129  207624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-840929"
	I1008 23:02:18.479397  207624 cli_runner.go:164] Run: docker container inspect auto-840929 --format={{.State.Status}}
	I1008 23:02:18.482734  207624 out.go:179] * Verifying Kubernetes components...
	I1008 23:02:18.485401  207624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:02:18.528016  207624 addons.go:238] Setting addon default-storageclass=true in "auto-840929"
	I1008 23:02:18.528056  207624 host.go:66] Checking if "auto-840929" exists ...
	I1008 23:02:18.528463  207624 cli_runner.go:164] Run: docker container inspect auto-840929 --format={{.State.Status}}
	I1008 23:02:18.536925  207624 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:02:16.735034  207155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1008 23:02:16.739511  207155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1008 23:02:16.739531  207155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1008 23:02:16.754449  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1008 23:02:17.147785  207155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 23:02:17.147936  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:17.148013  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-598445 minikube.k8s.io/updated_at=2025_10_08T23_02_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555 minikube.k8s.io/name=newest-cni-598445 minikube.k8s.io/primary=true
	I1008 23:02:17.327203  207155 ops.go:34] apiserver oom_adj: -16
	I1008 23:02:17.327313  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:17.828082  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:18.328352  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:18.827859  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:19.327729  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:19.828169  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:18.540057  207624 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:02:18.540080  207624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:02:18.540151  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:02:18.569750  207624 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:02:18.569772  207624 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:02:18.569839  207624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-840929
	I1008 23:02:18.595399  207624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/auto-840929/id_rsa Username:docker}
	I1008 23:02:18.606101  207624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/auto-840929/id_rsa Username:docker}
	I1008 23:02:18.783199  207624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 23:02:18.818085  207624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:02:18.823015  207624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:02:18.868084  207624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:02:19.767175  207624 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1008 23:02:19.769384  207624 node_ready.go:35] waiting up to 15m0s for node "auto-840929" to be "Ready" ...
	I1008 23:02:20.127471  207624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.304422325s)
	I1008 23:02:20.127497  207624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.259337398s)
	I1008 23:02:20.146479  207624 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1008 23:02:20.149307  207624 addons.go:514] duration metric: took 1.671215635s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1008 23:02:20.271414  207624 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-840929" context rescaled to 1 replicas
	I1008 23:02:20.327375  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:20.828002  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:21.327503  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:21.827733  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:22.328336  207155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 23:02:22.462861  207155 kubeadm.go:1113] duration metric: took 5.314984731s to wait for elevateKubeSystemPrivileges
	I1008 23:02:22.462890  207155 kubeadm.go:402] duration metric: took 28.71615477s to StartCluster
	I1008 23:02:22.462907  207155 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:22.462969  207155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:02:22.463924  207155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:22.464154  207155 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:02:22.464293  207155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1008 23:02:22.464567  207155 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:02:22.464550  207155 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:02:22.464636  207155 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-598445"
	I1008 23:02:22.464664  207155 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-598445"
	I1008 23:02:22.464692  207155 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:22.465185  207155 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:22.465444  207155 addons.go:69] Setting default-storageclass=true in profile "newest-cni-598445"
	I1008 23:02:22.465461  207155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-598445"
	I1008 23:02:22.465780  207155 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:22.467293  207155 out.go:179] * Verifying Kubernetes components...
	I1008 23:02:22.470754  207155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:02:22.495193  207155 addons.go:238] Setting addon default-storageclass=true in "newest-cni-598445"
	I1008 23:02:22.495233  207155 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:22.495664  207155 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:22.513035  207155 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 23:02:22.516703  207155 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:02:22.516735  207155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:02:22.516797  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:22.544978  207155 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:02:22.544999  207155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:02:22.545069  207155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:22.569748  207155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:22.584047  207155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:22.790266  207155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1008 23:02:22.830274  207155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:02:22.868995  207155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:02:22.897179  207155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:02:23.229823  207155 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:02:23.229882  207155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:02:23.229929  207155 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1008 23:02:23.607059  207155 api_server.go:72] duration metric: took 1.142876567s to wait for apiserver process to appear ...
	I1008 23:02:23.607083  207155 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:02:23.607111  207155 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 23:02:23.609407  207155 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1008 23:02:23.613288  207155 addons.go:514] duration metric: took 1.148720371s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1008 23:02:23.617544  207155 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 23:02:23.618470  207155 api_server.go:141] control plane version: v1.34.1
	I1008 23:02:23.618494  207155 api_server.go:131] duration metric: took 11.403723ms to wait for apiserver health ...
	I1008 23:02:23.618503  207155 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:02:23.624030  207155 system_pods.go:59] 8 kube-system pods found
	I1008 23:02:23.624077  207155 system_pods.go:61] "coredns-66bc5c9577-2qjrv" [ec8d975b-2220-48dc-9c8c-65169391c742] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1008 23:02:23.624087  207155 system_pods.go:61] "etcd-newest-cni-598445" [474854ed-7e7c-49d6-9fb8-b572780f4e37] Running
	I1008 23:02:23.624093  207155 system_pods.go:61] "kindnet-26wwk" [4c47d037-c2a6-404d-82fd-1efa6e55ad21] Running
	I1008 23:02:23.624098  207155 system_pods.go:61] "kube-apiserver-newest-cni-598445" [7fc58799-0bb0-45ee-a53c-b51583ec84ea] Running
	I1008 23:02:23.624110  207155 system_pods.go:61] "kube-controller-manager-newest-cni-598445" [c0fdc1df-8d0b-4bb3-a383-9d9fd102b6a6] Running
	I1008 23:02:23.624119  207155 system_pods.go:61] "kube-proxy-qjt47" [d3bc119f-422b-4196-a3e2-c9daa5264ebc] Running
	I1008 23:02:23.624124  207155 system_pods.go:61] "kube-scheduler-newest-cni-598445" [c795f706-3409-4b43-b1f8-2f3a465a03d7] Running
	I1008 23:02:23.624133  207155 system_pods.go:61] "storage-provisioner" [03aabe9b-e840-4770-bff2-e17a5caad244] Pending
	I1008 23:02:23.624139  207155 system_pods.go:74] duration metric: took 5.629435ms to wait for pod list to return data ...
	I1008 23:02:23.624148  207155 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:02:23.635707  207155 default_sa.go:45] found service account: "default"
	I1008 23:02:23.635739  207155 default_sa.go:55] duration metric: took 11.581737ms for default service account to be created ...
	I1008 23:02:23.635752  207155 kubeadm.go:586] duration metric: took 1.171575515s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1008 23:02:23.635770  207155 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:02:23.639354  207155 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:02:23.639387  207155 node_conditions.go:123] node cpu capacity is 2
	I1008 23:02:23.639400  207155 node_conditions.go:105] duration metric: took 3.624508ms to run NodePressure ...
	I1008 23:02:23.639413  207155 start.go:241] waiting for startup goroutines ...
	I1008 23:02:23.734065  207155 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-598445" context rescaled to 1 replicas
	I1008 23:02:23.734114  207155 start.go:246] waiting for cluster config update ...
	I1008 23:02:23.734145  207155 start.go:255] writing updated cluster config ...
	I1008 23:02:23.734492  207155 ssh_runner.go:195] Run: rm -f paused
	I1008 23:02:23.795665  207155 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:02:23.798947  207155 out.go:179] * Done! kubectl is now configured to use "newest-cni-598445" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.104110574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.111000834Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4b5f7507-dbf2-49f4-aee6-86b205aa6675 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.117740576Z" level=info msg="Ran pod sandbox d737ea443bca1815bc9a7bc3873d229ecc4b5db7d00bc2f582313150223b1125 with infra container: kube-system/kindnet-26wwk/POD" id=4b5f7507-dbf2-49f4-aee6-86b205aa6675 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.121354164Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-qjt47/POD" id=39eb79e8-e352-4a23-bbe5-ae76d03e2b1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.121428413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.12496588Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=39eb79e8-e352-4a23-bbe5-ae76d03e2b1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.125839623Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e640e9ce-951f-434c-b3f6-2fe8aef5d4f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.129394601Z" level=info msg="Ran pod sandbox f0621e4480d08ef7ea958f1649d777f192bc37e7618d6d71089a810f4f5203ce with infra container: kube-system/kube-proxy-qjt47/POD" id=39eb79e8-e352-4a23-bbe5-ae76d03e2b1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.134994882Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=45eedf50-c518-4534-840f-a89e2ad613ca name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.13533873Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=69a5d278-eec8-4b19-b5b4-f87f858e766c name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.138862807Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=15de0f75-9766-449f-bd44-f98268d5bd71 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.143317046Z" level=info msg="Creating container: kube-system/kindnet-26wwk/kindnet-cni" id=4b744f4a-801d-4cdd-bee1-cfabf93e9212 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.144118583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.146756337Z" level=info msg="Creating container: kube-system/kube-proxy-qjt47/kube-proxy" id=57894913-f3ac-4712-b8f0-e90b6e00f567 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.150019487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.155670591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.156172465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.157090172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.165809551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.18415119Z" level=info msg="Created container 769da4eb8c28384168b0f4bbc0d92176f4676ef3ac3fdaf8e9608bfeac7d07c4: kube-system/kindnet-26wwk/kindnet-cni" id=4b744f4a-801d-4cdd-bee1-cfabf93e9212 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.186090851Z" level=info msg="Starting container: 769da4eb8c28384168b0f4bbc0d92176f4676ef3ac3fdaf8e9608bfeac7d07c4" id=6df93bbb-e1e1-48ac-965b-0ce56c093dc0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.197317252Z" level=info msg="Started container" PID=1439 containerID=769da4eb8c28384168b0f4bbc0d92176f4676ef3ac3fdaf8e9608bfeac7d07c4 description=kube-system/kindnet-26wwk/kindnet-cni id=6df93bbb-e1e1-48ac-965b-0ce56c093dc0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d737ea443bca1815bc9a7bc3873d229ecc4b5db7d00bc2f582313150223b1125
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.231483769Z" level=info msg="Created container 612bc40cc018464b2844c90006bbc066ffeca9b1f155e6fcabeb61bb4dc2ddbd: kube-system/kube-proxy-qjt47/kube-proxy" id=57894913-f3ac-4712-b8f0-e90b6e00f567 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.232346772Z" level=info msg="Starting container: 612bc40cc018464b2844c90006bbc066ffeca9b1f155e6fcabeb61bb4dc2ddbd" id=479de67e-32b3-4c82-8989-d692ec214b88 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:02:22 newest-cni-598445 crio[841]: time="2025-10-08T23:02:22.236083791Z" level=info msg="Started container" PID=1444 containerID=612bc40cc018464b2844c90006bbc066ffeca9b1f155e6fcabeb61bb4dc2ddbd description=kube-system/kube-proxy-qjt47/kube-proxy id=479de67e-32b3-4c82-8989-d692ec214b88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0621e4480d08ef7ea958f1649d777f192bc37e7618d6d71089a810f4f5203ce
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	612bc40cc0184       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 seconds ago       Running             kube-proxy                0                   f0621e4480d08       kube-proxy-qjt47                            kube-system
	769da4eb8c283       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   d737ea443bca1       kindnet-26wwk                               kube-system
	0468e1ce1e4e0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago      Running             kube-scheduler            0                   6704e96195e09       kube-scheduler-newest-cni-598445            kube-system
	fb533de9b8b6a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago      Running             kube-controller-manager   0                   88f0c47c82f8f       kube-controller-manager-newest-cni-598445   kube-system
	5ec30587a9f4a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago      Running             kube-apiserver            0                   d50110abe8156       kube-apiserver-newest-cni-598445            kube-system
	3c2fe63f18b46       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago      Running             etcd                      0                   66fd26ab999a9       etcd-newest-cni-598445                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-598445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-598445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=newest-cni-598445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T23_02_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 23:02:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-598445
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 23:02:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 23:02:16 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 23:02:16 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 23:02:16 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 08 Oct 2025 23:02:16 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-598445
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8af15d21f944298ac182cebf3920594
	  System UUID:                fc86293c-d5bb-4314-9225-e89a6cd1ff6e
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-598445                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-26wwk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-598445             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-598445    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-qjt47                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-598445             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 21s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node newest-cni-598445 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x8 over 21s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-598445 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-598445 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-598445 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-598445 event: Registered Node newest-cni-598445 in Controller
	
	
	==> dmesg <==
	[  +0.954145] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:58] overlayfs: idmapped layers are currently not supported
	[  +5.164783] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:00] overlayfs: idmapped layers are currently not supported
	[  +1.568442] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:02] overlayfs: idmapped layers are currently not supported
	[  +3.214273] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3c2fe63f18b46a855602fdb835b3448285f3ef43cc632ce19605041246f20bb3] <==
	{"level":"warn","ts":"2025-10-08T23:02:11.087642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.152322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.169856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.207683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.238853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.278859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.300334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.345821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.371879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.405231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.431064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.465741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.489021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.524989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.562090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.578796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.621299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.628995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.712568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.723552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.774893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.801705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.820787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:11.847964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:12.045310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40490","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:02:25 up  1:44,  0 user,  load average: 6.00, 3.43, 2.37
	Linux newest-cni-598445 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [769da4eb8c28384168b0f4bbc0d92176f4676ef3ac3fdaf8e9608bfeac7d07c4] <==
	I1008 23:02:22.306874       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 23:02:22.307364       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 23:02:22.307649       1 main.go:148] setting mtu 1500 for CNI 
	I1008 23:02:22.307692       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 23:02:22.307760       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T23:02:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 23:02:22.514680       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 23:02:22.514703       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 23:02:22.514712       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 23:02:22.515370       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [5ec30587a9f4a87e66d4279cbf5aecaf5a1dc8a7cf12638f74002f035d20db80] <==
	I1008 23:02:13.111952       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 23:02:13.111960       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1008 23:02:13.119323       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 23:02:13.138010       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 23:02:13.163910       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1008 23:02:13.171982       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 23:02:13.216374       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 23:02:13.216475       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 23:02:13.716557       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1008 23:02:13.727577       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1008 23:02:13.727600       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 23:02:15.126702       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 23:02:15.183793       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 23:02:15.257197       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1008 23:02:15.267437       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1008 23:02:15.268617       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 23:02:15.276446       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 23:02:15.965961       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 23:02:16.188247       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 23:02:16.226405       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1008 23:02:16.263047       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1008 23:02:21.661261       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 23:02:21.666603       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1008 23:02:21.758796       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1008 23:02:21.918121       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fb533de9b8b6a374771cbf3b84aee6a87fb1f48715cdcf4df8b0c659f8177b64] <==
	I1008 23:02:21.044056       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1008 23:02:21.052114       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1008 23:02:21.052248       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1008 23:02:21.052864       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 23:02:21.053027       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1008 23:02:21.053086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 23:02:21.053684       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1008 23:02:21.053813       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 23:02:21.053881       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 23:02:21.053941       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-598445"
	I1008 23:02:21.053976       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 23:02:21.063068       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 23:02:21.063292       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1008 23:02:21.063529       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 23:02:21.071330       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 23:02:21.072258       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-598445" podCIDRs=["10.42.0.0/24"]
	I1008 23:02:21.072399       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1008 23:02:21.072446       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 23:02:21.072456       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1008 23:02:21.078090       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1008 23:02:21.090672       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1008 23:02:21.115677       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:02:21.200578       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:02:21.200622       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1008 23:02:21.200630       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [612bc40cc018464b2844c90006bbc066ffeca9b1f155e6fcabeb61bb4dc2ddbd] <==
	I1008 23:02:22.293207       1 server_linux.go:53] "Using iptables proxy"
	I1008 23:02:22.428195       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 23:02:22.541782       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 23:02:22.541823       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1008 23:02:22.541910       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 23:02:22.610090       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 23:02:22.610150       1 server_linux.go:132] "Using iptables Proxier"
	I1008 23:02:22.622551       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 23:02:22.622911       1 server.go:527] "Version info" version="v1.34.1"
	I1008 23:02:22.622934       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:02:22.629606       1 config.go:200] "Starting service config controller"
	I1008 23:02:22.630819       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 23:02:22.630860       1 config.go:106] "Starting endpoint slice config controller"
	I1008 23:02:22.630872       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 23:02:22.630893       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 23:02:22.630907       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 23:02:22.631811       1 config.go:309] "Starting node config controller"
	I1008 23:02:22.631825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 23:02:22.631831       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 23:02:22.731573       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 23:02:22.731614       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 23:02:22.731656       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0468e1ce1e4e0d21d27203a91f91ff27cae0f8350ee06f72698cacdfbaf3e82a] <==
	E1008 23:02:13.140197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 23:02:13.140352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 23:02:13.140528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 23:02:13.141101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 23:02:13.141191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 23:02:13.950957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1008 23:02:13.973441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1008 23:02:14.032857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1008 23:02:14.173724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1008 23:02:14.223423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1008 23:02:14.245974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1008 23:02:14.268791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1008 23:02:14.277278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1008 23:02:14.368854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1008 23:02:14.388100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1008 23:02:14.403469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1008 23:02:14.408209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1008 23:02:14.480046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1008 23:02:14.486041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1008 23:02:14.486375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1008 23:02:14.596581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1008 23:02:14.603404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1008 23:02:14.620502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1008 23:02:14.705161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1008 23:02:16.180731       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 08 23:02:16 newest-cni-598445 kubelet[1321]: I1008 23:02:16.560935    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/0ade283dd00443700c5936cbda808b82-etcd-data\") pod \"etcd-newest-cni-598445\" (UID: \"0ade283dd00443700c5936cbda808b82\") " pod="kube-system/etcd-newest-cni-598445"
	Oct 08 23:02:16 newest-cni-598445 kubelet[1321]: I1008 23:02:16.560952    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d4982121276c13f913ec3ee1e2541dd-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-598445\" (UID: \"4d4982121276c13f913ec3ee1e2541dd\") " pod="kube-system/kube-controller-manager-newest-cni-598445"
	Oct 08 23:02:16 newest-cni-598445 kubelet[1321]: I1008 23:02:16.560977    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53ae5ec32867eb8923b530c3ad893bd4-kubeconfig\") pod \"kube-scheduler-newest-cni-598445\" (UID: \"53ae5ec32867eb8923b530c3ad893bd4\") " pod="kube-system/kube-scheduler-newest-cni-598445"
	Oct 08 23:02:16 newest-cni-598445 kubelet[1321]: I1008 23:02:16.560994    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d4982121276c13f913ec3ee1e2541dd-ca-certs\") pod \"kube-controller-manager-newest-cni-598445\" (UID: \"4d4982121276c13f913ec3ee1e2541dd\") " pod="kube-system/kube-controller-manager-newest-cni-598445"
	Oct 08 23:02:16 newest-cni-598445 kubelet[1321]: I1008 23:02:16.561012    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d4982121276c13f913ec3ee1e2541dd-k8s-certs\") pod \"kube-controller-manager-newest-cni-598445\" (UID: \"4d4982121276c13f913ec3ee1e2541dd\") " pod="kube-system/kube-controller-manager-newest-cni-598445"
	Oct 08 23:02:17 newest-cni-598445 kubelet[1321]: I1008 23:02:17.099278    1321 apiserver.go:52] "Watching apiserver"
	Oct 08 23:02:17 newest-cni-598445 kubelet[1321]: I1008 23:02:17.143607    1321 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 08 23:02:17 newest-cni-598445 kubelet[1321]: I1008 23:02:17.296160    1321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-598445"
	Oct 08 23:02:17 newest-cni-598445 kubelet[1321]: E1008 23:02:17.346560    1321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-598445\" already exists" pod="kube-system/kube-scheduler-newest-cni-598445"
	Oct 08 23:02:17 newest-cni-598445 kubelet[1321]: I1008 23:02:17.393999    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-598445" podStartSLOduration=1.393981363 podStartE2EDuration="1.393981363s" podCreationTimestamp="2025-10-08 23:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 23:02:17.372659483 +0000 UTC m=+1.378794478" watchObservedRunningTime="2025-10-08 23:02:17.393981363 +0000 UTC m=+1.400116432"
	Oct 08 23:02:17 newest-cni-598445 kubelet[1321]: I1008 23:02:17.425817    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-598445" podStartSLOduration=3.42579599 podStartE2EDuration="3.42579599s" podCreationTimestamp="2025-10-08 23:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 23:02:17.394404483 +0000 UTC m=+1.400539503" watchObservedRunningTime="2025-10-08 23:02:17.42579599 +0000 UTC m=+1.431931034"
	Oct 08 23:02:17 newest-cni-598445 kubelet[1321]: I1008 23:02:17.445141    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-598445" podStartSLOduration=1.445109753 podStartE2EDuration="1.445109753s" podCreationTimestamp="2025-10-08 23:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 23:02:17.426313601 +0000 UTC m=+1.432448596" watchObservedRunningTime="2025-10-08 23:02:17.445109753 +0000 UTC m=+1.451244748"
	Oct 08 23:02:17 newest-cni-598445 kubelet[1321]: I1008 23:02:17.482480    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-598445" podStartSLOduration=1.4824606839999999 podStartE2EDuration="1.482460684s" podCreationTimestamp="2025-10-08 23:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 23:02:17.44557714 +0000 UTC m=+1.451712135" watchObservedRunningTime="2025-10-08 23:02:17.482460684 +0000 UTC m=+1.488595679"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.118938    1321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.120190    1321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.912203    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c47d037-c2a6-404d-82fd-1efa6e55ad21-lib-modules\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.912259    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjgc8\" (UniqueName: \"kubernetes.io/projected/4c47d037-c2a6-404d-82fd-1efa6e55ad21-kube-api-access-xjgc8\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.912292    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d3bc119f-422b-4196-a3e2-c9daa5264ebc-kube-proxy\") pod \"kube-proxy-qjt47\" (UID: \"d3bc119f-422b-4196-a3e2-c9daa5264ebc\") " pod="kube-system/kube-proxy-qjt47"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.912317    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3bc119f-422b-4196-a3e2-c9daa5264ebc-lib-modules\") pod \"kube-proxy-qjt47\" (UID: \"d3bc119f-422b-4196-a3e2-c9daa5264ebc\") " pod="kube-system/kube-proxy-qjt47"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.912339    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4c47d037-c2a6-404d-82fd-1efa6e55ad21-cni-cfg\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.912356    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3bc119f-422b-4196-a3e2-c9daa5264ebc-xtables-lock\") pod \"kube-proxy-qjt47\" (UID: \"d3bc119f-422b-4196-a3e2-c9daa5264ebc\") " pod="kube-system/kube-proxy-qjt47"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.912377    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfqm4\" (UniqueName: \"kubernetes.io/projected/d3bc119f-422b-4196-a3e2-c9daa5264ebc-kube-api-access-nfqm4\") pod \"kube-proxy-qjt47\" (UID: \"d3bc119f-422b-4196-a3e2-c9daa5264ebc\") " pod="kube-system/kube-proxy-qjt47"
	Oct 08 23:02:21 newest-cni-598445 kubelet[1321]: I1008 23:02:21.912396    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c47d037-c2a6-404d-82fd-1efa6e55ad21-xtables-lock\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:22 newest-cni-598445 kubelet[1321]: I1008 23:02:22.030130    1321 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 08 23:02:22 newest-cni-598445 kubelet[1321]: I1008 23:02:22.371970    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjt47" podStartSLOduration=1.371948416 podStartE2EDuration="1.371948416s" podCreationTimestamp="2025-10-08 23:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-08 23:02:22.350907089 +0000 UTC m=+6.357042076" watchObservedRunningTime="2025-10-08 23:02:22.371948416 +0000 UTC m=+6.378083403"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-598445 -n newest-cni-598445
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-598445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-2qjrv storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-598445 describe pod coredns-66bc5c9577-2qjrv storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-598445 describe pod coredns-66bc5c9577-2qjrv storage-provisioner: exit status 1 (93.162286ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-2qjrv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-598445 describe pod coredns-66bc5c9577-2qjrv storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-598445 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-598445 --alsologtostderr -v=1: exit status 80 (2.173080029s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-598445 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 23:02:43.981109  214454 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:02:43.981837  214454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:02:43.981875  214454 out.go:374] Setting ErrFile to fd 2...
	I1008 23:02:43.981895  214454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:02:43.982197  214454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:02:43.982609  214454 out.go:368] Setting JSON to false
	I1008 23:02:43.982665  214454 mustload.go:65] Loading cluster: newest-cni-598445
	I1008 23:02:43.983087  214454 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:02:43.983601  214454 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:44.003413  214454 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:44.003759  214454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:02:44.107411  214454 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-08 23:02:44.096057388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:02:44.108105  214454 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-598445 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1008 23:02:44.111680  214454 out.go:179] * Pausing node newest-cni-598445 ... 
	I1008 23:02:44.114673  214454 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:44.115026  214454 ssh_runner.go:195] Run: systemctl --version
	I1008 23:02:44.115067  214454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:44.135375  214454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:44.241199  214454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:02:44.264321  214454 pause.go:52] kubelet running: true
	I1008 23:02:44.264388  214454 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:02:44.576924  214454 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:02:44.577028  214454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:02:44.690138  214454 cri.go:89] found id: "7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3"
	I1008 23:02:44.690170  214454 cri.go:89] found id: "8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061"
	I1008 23:02:44.690175  214454 cri.go:89] found id: "0bc68321c46808e03761c9cb44ba461f8c6e1bc97bd15431fd9e2c7996063a0a"
	I1008 23:02:44.690180  214454 cri.go:89] found id: "3e64781da390812405d95dae933cfc7c3dd1126eb65abc070fb5a61a4a805bd1"
	I1008 23:02:44.690183  214454 cri.go:89] found id: "a02593f091bf4e1b869b3247780e1413dad2f19604ea728bddf11702a939688d"
	I1008 23:02:44.690187  214454 cri.go:89] found id: "2e1d9249276823a522db408755e54fa95f368a0472e8b4462337afea4b239c01"
	I1008 23:02:44.690190  214454 cri.go:89] found id: ""
	I1008 23:02:44.690237  214454 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:02:44.703216  214454 retry.go:31] will retry after 300.562753ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:02:44Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:02:45.006213  214454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:02:45.034781  214454 pause.go:52] kubelet running: false
	I1008 23:02:45.034858  214454 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:02:45.326155  214454 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:02:45.326244  214454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:02:45.447912  214454 cri.go:89] found id: "7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3"
	I1008 23:02:45.447932  214454 cri.go:89] found id: "8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061"
	I1008 23:02:45.447937  214454 cri.go:89] found id: "0bc68321c46808e03761c9cb44ba461f8c6e1bc97bd15431fd9e2c7996063a0a"
	I1008 23:02:45.447941  214454 cri.go:89] found id: "3e64781da390812405d95dae933cfc7c3dd1126eb65abc070fb5a61a4a805bd1"
	I1008 23:02:45.447944  214454 cri.go:89] found id: "a02593f091bf4e1b869b3247780e1413dad2f19604ea728bddf11702a939688d"
	I1008 23:02:45.447947  214454 cri.go:89] found id: "2e1d9249276823a522db408755e54fa95f368a0472e8b4462337afea4b239c01"
	I1008 23:02:45.447951  214454 cri.go:89] found id: ""
	I1008 23:02:45.448009  214454 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:02:45.462852  214454 retry.go:31] will retry after 336.576029ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:02:45Z" level=error msg="open /run/runc: no such file or directory"
	I1008 23:02:45.800420  214454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 23:02:45.814805  214454 pause.go:52] kubelet running: false
	I1008 23:02:45.814870  214454 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1008 23:02:45.965695  214454 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1008 23:02:45.965772  214454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1008 23:02:46.046703  214454 cri.go:89] found id: "7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3"
	I1008 23:02:46.046724  214454 cri.go:89] found id: "8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061"
	I1008 23:02:46.046729  214454 cri.go:89] found id: "0bc68321c46808e03761c9cb44ba461f8c6e1bc97bd15431fd9e2c7996063a0a"
	I1008 23:02:46.046733  214454 cri.go:89] found id: "3e64781da390812405d95dae933cfc7c3dd1126eb65abc070fb5a61a4a805bd1"
	I1008 23:02:46.046736  214454 cri.go:89] found id: "a02593f091bf4e1b869b3247780e1413dad2f19604ea728bddf11702a939688d"
	I1008 23:02:46.046740  214454 cri.go:89] found id: "2e1d9249276823a522db408755e54fa95f368a0472e8b4462337afea4b239c01"
	I1008 23:02:46.046744  214454 cri.go:89] found id: ""
	I1008 23:02:46.046820  214454 ssh_runner.go:195] Run: sudo runc list -f json
	I1008 23:02:46.061785  214454 out.go:203] 
	W1008 23:02:46.064650  214454 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:02:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T23:02:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1008 23:02:46.064720  214454 out.go:285] * 
	* 
	W1008 23:02:46.070254  214454 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 23:02:46.075158  214454 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-598445 --alsologtostderr -v=1 failed: exit status 80
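The exit status 80 above traces to the repeated `sudo runc list -f json` failures in the stderr: the pause path enumerates containers via runc state under /run/runc, and that directory does not exist on this CRI-O node, so every retry ends with "open /run/runc: no such file or directory". A minimal diagnostic sketch, assuming the newest-cni-598445 profile is still running (these commands are illustrative only and were not part of the recorded run):

out/minikube-linux-arm64 -p newest-cni-598445 ssh "sudo ls -d /run/runc /run/crun 2>&1; sudo crictl ps --quiet | wc -l"

If crictl still reports running containers while /run/runc is absent (for example because the configured OCI runtime keeps its state elsewhere, such as /run/crun), the runc-based listing used by pause cannot succeed regardless of cluster health.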
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-598445
helpers_test.go:243: (dbg) docker inspect newest-cni-598445:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0",
	        "Created": "2025-10-08T23:01:43.562370907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T23:02:28.136686601Z",
	            "FinishedAt": "2025-10-08T23:02:27.167506676Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/hosts",
	        "LogPath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0-json.log",
	        "Name": "/newest-cni-598445",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-598445:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-598445",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0",
	                "LowerDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-598445",
	                "Source": "/var/lib/docker/volumes/newest-cni-598445/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-598445",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-598445",
	                "name.minikube.sigs.k8s.io": "newest-cni-598445",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f2e5a1e7828c7e856e39451ee47b9ae97b0bf272b83381271047c9d44817e48",
	            "SandboxKey": "/var/run/docker/netns/0f2e5a1e7828",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-598445": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:e2:1b:de:40:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a0b1ff28b0c97915ff48c8d0f7665a15b64c8eae67960eb9db0d077a1b90fb71",
	                    "EndpointID": "88ebd7c0796b4746984baef6fc88cddd65c767784a5490522474c04bb2a8aa13",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-598445",
	                        "d0d27dc20f53"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
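Note on the inspect output above: the port bindings it reports (22/tcp on 127.0.0.1:33101, 2376/tcp on 33102, 8443/tcp on 33104, and so on) are the same mappings the start log further down resolves with a Go template before dialing SSH. Below is a minimal, hedged sketch (not part of the test suite) of reading that host port the same way; it assumes only that the Docker CLI is on PATH and that the container is named newest-cni-598445 as in the output above.

// Sketch: resolve the host port bound to the container's 22/tcp using the
// same Go template that appears in the cli_runner lines of the log below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPortForSSH(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortForSSH("newest-cni-598445")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // "33101" per the inspect output above
}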
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-598445 -n newest-cni-598445
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-598445 -n newest-cni-598445: exit status 2 (342.450266ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
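The helper above tolerates exit status 2 from out/minikube-linux-arm64 status --format={{.Host}} because the host still prints Running while the profile is paused. A minimal sketch, assuming only what the helper itself notes ("may be ok") about that exit code, of accepting it while still capturing the printed host state:

// Sketch: run the same status command and treat exit status 2 as non-fatal,
// since the post-mortem helper above flags it as "may be ok".
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "newest-cni-598445", "-n", "newest-cni-598445")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out)) // e.g. "Running", as in the stdout above

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
		fmt.Printf("status exited 2 (may be ok), host=%q\n", host)
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host:", host)
}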
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-598445 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-598445 logs -n 25: (1.058364293s)
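For post-mortem capture the helper bounds the output with logs -n 25, which completed here in about a second. A small sketch of running that same command under a context so a hung logs call cannot stall the post-mortem; the five-second timeout is an illustrative assumption, not a value taken from the harness:

// Sketch: invoke "minikube logs -n 25" under a timeout; the 5s bound is an
// assumption for illustration only.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64",
		"-p", "newest-cni-598445", "logs", "-n", "25")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("logs failed:", err)
	}
	fmt.Print(string(out))
}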
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:59 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │                     │
	│ stop    │ -p embed-certs-825429 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ stop    │ -p default-k8s-diff-port-779490 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-825429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-779490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-779490 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-779490 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ image   │ embed-certs-825429 image list --format=json                                                                                                                                                                                                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p embed-certs-825429 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-779490                                                                                                                                                                                                               │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ delete  │ -p embed-certs-825429                                                                                                                                                                                                                         │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ delete  │ -p default-k8s-diff-port-779490                                                                                                                                                                                                               │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ start   │ -p newest-cni-598445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:02 UTC │
	│ delete  │ -p embed-certs-825429                                                                                                                                                                                                                         │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ start   │ -p auto-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-840929                  │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-598445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │                     │
	│ stop    │ -p newest-cni-598445 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │ 08 Oct 25 23:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-598445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │ 08 Oct 25 23:02 UTC │
	│ start   │ -p newest-cni-598445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │ 08 Oct 25 23:02 UTC │
	│ image   │ newest-cni-598445 image list --format=json                                                                                                                                                                                                    │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │ 08 Oct 25 23:02 UTC │
	│ pause   │ -p newest-cni-598445 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 23:02:27
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 23:02:27.845123  212817 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:02:27.845260  212817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:02:27.845273  212817 out.go:374] Setting ErrFile to fd 2...
	I1008 23:02:27.845279  212817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:02:27.845554  212817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:02:27.845977  212817 out.go:368] Setting JSON to false
	I1008 23:02:27.846941  212817 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6298,"bootTime":1759958250,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 23:02:27.847012  212817 start.go:141] virtualization:  
	I1008 23:02:27.850037  212817 out.go:179] * [newest-cni-598445] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 23:02:27.853930  212817 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 23:02:27.854065  212817 notify.go:220] Checking for updates...
	I1008 23:02:27.860158  212817 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 23:02:27.863121  212817 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:02:27.865862  212817 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 23:02:27.868729  212817 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 23:02:27.871707  212817 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 23:02:27.874919  212817 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:02:27.875598  212817 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 23:02:27.909532  212817 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 23:02:27.909709  212817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:02:27.970938  212817 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 23:02:27.961329711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:02:27.971066  212817 docker.go:318] overlay module found
	I1008 23:02:27.974224  212817 out.go:179] * Using the docker driver based on existing profile
	I1008 23:02:27.977138  212817 start.go:305] selected driver: docker
	I1008 23:02:27.977157  212817 start.go:925] validating driver "docker" against &{Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:02:27.977259  212817 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 23:02:27.978187  212817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:02:28.035102  212817 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 23:02:28.025432092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:02:28.035488  212817 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1008 23:02:28.035527  212817 cni.go:84] Creating CNI manager for ""
	I1008 23:02:28.035587  212817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:02:28.035631  212817 start.go:349] cluster config:
	{Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:02:28.040738  212817 out.go:179] * Starting "newest-cni-598445" primary control-plane node in "newest-cni-598445" cluster
	I1008 23:02:28.043633  212817 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 23:02:28.046749  212817 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 23:02:28.049800  212817 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:02:28.049839  212817 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 23:02:28.049867  212817 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 23:02:28.049877  212817 cache.go:58] Caching tarball of preloaded images
	I1008 23:02:28.049964  212817 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 23:02:28.049974  212817 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 23:02:28.050105  212817 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/config.json ...
	I1008 23:02:28.076607  212817 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 23:02:28.076632  212817 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 23:02:28.076652  212817 cache.go:232] Successfully downloaded all kic artifacts
	I1008 23:02:28.076677  212817 start.go:360] acquireMachinesLock for newest-cni-598445: {Name:mkd45e8e16e845f1601dda37260d96039774ac83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 23:02:28.076769  212817 start.go:364] duration metric: took 40.247µs to acquireMachinesLock for "newest-cni-598445"
	I1008 23:02:28.076789  212817 start.go:96] Skipping create...Using existing machine configuration
	I1008 23:02:28.076804  212817 fix.go:54] fixHost starting: 
	I1008 23:02:28.077070  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:28.096655  212817 fix.go:112] recreateIfNeeded on newest-cni-598445: state=Stopped err=<nil>
	W1008 23:02:28.096699  212817 fix.go:138] unexpected machine state, will restart: <nil>
	W1008 23:02:28.773352  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	W1008 23:02:31.273111  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	I1008 23:02:28.100159  212817 out.go:252] * Restarting existing docker container for "newest-cni-598445" ...
	I1008 23:02:28.100303  212817 cli_runner.go:164] Run: docker start newest-cni-598445
	I1008 23:02:28.360895  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:28.392622  212817 kic.go:430] container "newest-cni-598445" state is running.
	I1008 23:02:28.393328  212817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-598445
	I1008 23:02:28.418199  212817 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/config.json ...
	I1008 23:02:28.418478  212817 machine.go:93] provisionDockerMachine start ...
	I1008 23:02:28.418536  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:28.442421  212817 main.go:141] libmachine: Using SSH client type: native
	I1008 23:02:28.442791  212817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1008 23:02:28.442815  212817 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:02:28.443800  212817 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 23:02:31.589260  212817 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-598445
	
	I1008 23:02:31.589283  212817 ubuntu.go:182] provisioning hostname "newest-cni-598445"
	I1008 23:02:31.589341  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:31.609355  212817 main.go:141] libmachine: Using SSH client type: native
	I1008 23:02:31.609809  212817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1008 23:02:31.609830  212817 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-598445 && echo "newest-cni-598445" | sudo tee /etc/hostname
	I1008 23:02:31.771332  212817 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-598445
	
	I1008 23:02:31.771499  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:31.790840  212817 main.go:141] libmachine: Using SSH client type: native
	I1008 23:02:31.791162  212817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1008 23:02:31.791183  212817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-598445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-598445/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-598445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:02:31.937825  212817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:02:31.937920  212817 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:02:31.937968  212817 ubuntu.go:190] setting up certificates
	I1008 23:02:31.938003  212817 provision.go:84] configureAuth start
	I1008 23:02:31.938085  212817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-598445
	I1008 23:02:31.960394  212817 provision.go:143] copyHostCerts
	I1008 23:02:31.960464  212817 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:02:31.960485  212817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:02:31.960570  212817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:02:31.960681  212817 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:02:31.960697  212817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:02:31.960725  212817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:02:31.960794  212817 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:02:31.960804  212817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:02:31.960831  212817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:02:31.960895  212817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.newest-cni-598445 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-598445]
	I1008 23:02:32.210792  212817 provision.go:177] copyRemoteCerts
	I1008 23:02:32.210865  212817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:02:32.210922  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:32.228908  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:32.335467  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:02:32.353013  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 23:02:32.372099  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 23:02:32.390319  212817 provision.go:87] duration metric: took 452.261183ms to configureAuth
	I1008 23:02:32.390347  212817 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:02:32.390606  212817 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:02:32.390711  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:32.409766  212817 main.go:141] libmachine: Using SSH client type: native
	I1008 23:02:32.410082  212817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1008 23:02:32.410103  212817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:02:32.725036  212817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:02:32.725065  212817 machine.go:96] duration metric: took 4.306576145s to provisionDockerMachine
	I1008 23:02:32.725078  212817 start.go:293] postStartSetup for "newest-cni-598445" (driver="docker")
	I1008 23:02:32.725089  212817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:02:32.725157  212817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:02:32.725201  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:32.746345  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:32.854165  212817 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:02:32.857908  212817 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:02:32.857939  212817 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:02:32.857952  212817 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:02:32.858007  212817 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:02:32.858092  212817 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:02:32.858200  212817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:02:32.865938  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:02:32.884636  212817 start.go:296] duration metric: took 159.542044ms for postStartSetup
	I1008 23:02:32.884735  212817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:02:32.884781  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:32.903187  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:33.006123  212817 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:02:33.017878  212817 fix.go:56] duration metric: took 4.941076397s for fixHost
	I1008 23:02:33.017903  212817 start.go:83] releasing machines lock for "newest-cni-598445", held for 4.941125054s
	I1008 23:02:33.017981  212817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-598445
	I1008 23:02:33.036352  212817 ssh_runner.go:195] Run: cat /version.json
	I1008 23:02:33.036411  212817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:02:33.036493  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:33.036413  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:33.057999  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:33.061421  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:33.259658  212817 ssh_runner.go:195] Run: systemctl --version
	I1008 23:02:33.266674  212817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:02:33.307730  212817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:02:33.312343  212817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:02:33.312455  212817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:02:33.320647  212817 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 23:02:33.320669  212817 start.go:495] detecting cgroup driver to use...
	I1008 23:02:33.320717  212817 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:02:33.320766  212817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:02:33.338044  212817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:02:33.355104  212817 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:02:33.355199  212817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:02:33.371161  212817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:02:33.385402  212817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:02:33.502865  212817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:02:33.621251  212817 docker.go:234] disabling docker service ...
	I1008 23:02:33.621342  212817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:02:33.638444  212817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:02:33.652108  212817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:02:33.796983  212817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:02:33.926134  212817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:02:33.941384  212817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:02:33.957057  212817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:02:33.957124  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:33.966819  212817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:02:33.966894  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:33.976008  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:33.985217  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:33.994995  212817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:02:34.007343  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:34.018257  212817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:34.027552  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:34.037358  212817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:02:34.045490  212817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:02:34.054624  212817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:02:34.177161  212817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:02:34.299834  212817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:02:34.299959  212817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:02:34.304375  212817 start.go:563] Will wait 60s for crictl version
	I1008 23:02:34.304487  212817 ssh_runner.go:195] Run: which crictl
	I1008 23:02:34.308149  212817 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:02:34.336527  212817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1008 23:02:34.336622  212817 ssh_runner.go:195] Run: crio --version
	I1008 23:02:34.367482  212817 ssh_runner.go:195] Run: crio --version
	I1008 23:02:34.403747  212817 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 23:02:34.406666  212817 cli_runner.go:164] Run: docker network inspect newest-cni-598445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:02:34.423175  212817 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 23:02:34.427316  212817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:02:34.440399  212817 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1008 23:02:34.443264  212817 kubeadm.go:883] updating cluster {Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:02:34.443421  212817 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:02:34.443501  212817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:02:34.487106  212817 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:02:34.487134  212817 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:02:34.487215  212817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:02:34.518998  212817 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:02:34.519026  212817 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:02:34.519037  212817 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1008 23:02:34.519164  212817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-598445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:02:34.519277  212817 ssh_runner.go:195] Run: crio config
	I1008 23:02:34.594663  212817 cni.go:84] Creating CNI manager for ""
	I1008 23:02:34.594689  212817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:02:34.594712  212817 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1008 23:02:34.594738  212817 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-598445 NodeName:newest-cni-598445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:02:34.594889  212817 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-598445"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:02:34.594972  212817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:02:34.605092  212817 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:02:34.605275  212817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:02:34.614139  212817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 23:02:34.628285  212817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:02:34.641238  212817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1008 23:02:34.654859  212817 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:02:34.658719  212817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:02:34.669334  212817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:02:34.793088  212817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:02:34.810703  212817 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445 for IP: 192.168.85.2
	I1008 23:02:34.810728  212817 certs.go:195] generating shared ca certs ...
	I1008 23:02:34.810744  212817 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:34.810969  212817 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:02:34.811041  212817 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:02:34.811055  212817 certs.go:257] generating profile certs ...
	I1008 23:02:34.811167  212817 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/client.key
	I1008 23:02:34.811257  212817 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key.1a399b11
	I1008 23:02:34.811338  212817 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.key
	I1008 23:02:34.811476  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:02:34.811534  212817 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:02:34.811549  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:02:34.811578  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:02:34.811635  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:02:34.811663  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:02:34.811728  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:02:34.812399  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:02:34.834729  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:02:34.853547  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:02:34.871563  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:02:34.895254  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 23:02:34.913155  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 23:02:34.932604  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:02:34.951931  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:02:34.988055  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:02:35.020450  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:02:35.050591  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:02:35.072539  212817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:02:35.087992  212817 ssh_runner.go:195] Run: openssl version
	I1008 23:02:35.095325  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:02:35.104682  212817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:02:35.108996  212817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:02:35.109075  212817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:02:35.156427  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:02:35.164815  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:02:35.174818  212817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:02:35.178931  212817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:02:35.178996  212817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:02:35.220434  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:02:35.229272  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:02:35.237888  212817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:02:35.241846  212817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:02:35.241950  212817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:02:35.283354  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:02:35.291498  212817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:02:35.295580  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:02:35.342488  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:02:35.384934  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:02:35.426847  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:02:35.470747  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:02:35.516733  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1008 23:02:35.565033  212817 kubeadm.go:400] StartCluster: {Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:02:35.565119  212817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:02:35.565185  212817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:02:35.641047  212817 cri.go:89] found id: ""
	I1008 23:02:35.641122  212817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:02:35.651383  212817 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:02:35.651445  212817 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:02:35.651524  212817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:02:35.689189  212817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:02:35.689948  212817 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-598445" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:02:35.690299  212817 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-598445" cluster setting kubeconfig missing "newest-cni-598445" context setting]
	I1008 23:02:35.690797  212817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:35.692559  212817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:02:35.710207  212817 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 23:02:35.710289  212817 kubeadm.go:601] duration metric: took 58.82148ms to restartPrimaryControlPlane
	I1008 23:02:35.710313  212817 kubeadm.go:402] duration metric: took 145.288624ms to StartCluster
	I1008 23:02:35.710358  212817 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:35.710488  212817 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:02:35.711562  212817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:35.711861  212817 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:02:35.712166  212817 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:02:35.712224  212817 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:02:35.712298  212817 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-598445"
	I1008 23:02:35.712316  212817 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-598445"
	W1008 23:02:35.712322  212817 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:02:35.712343  212817 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:35.712787  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:35.712929  212817 addons.go:69] Setting dashboard=true in profile "newest-cni-598445"
	I1008 23:02:35.712946  212817 addons.go:238] Setting addon dashboard=true in "newest-cni-598445"
	W1008 23:02:35.712978  212817 addons.go:247] addon dashboard should already be in state true
	I1008 23:02:35.713002  212817 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:35.713375  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:35.713674  212817 addons.go:69] Setting default-storageclass=true in profile "newest-cni-598445"
	I1008 23:02:35.713695  212817 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-598445"
	I1008 23:02:35.713960  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:35.718306  212817 out.go:179] * Verifying Kubernetes components...
	I1008 23:02:35.722769  212817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:02:35.757672  212817 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:02:35.760659  212817 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:02:35.763718  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:02:35.763740  212817 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:02:35.763811  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:35.778039  212817 addons.go:238] Setting addon default-storageclass=true in "newest-cni-598445"
	W1008 23:02:35.778061  212817 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:02:35.778086  212817 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:35.778528  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:35.788710  212817 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1008 23:02:33.773001  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	W1008 23:02:35.776876  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	I1008 23:02:35.791570  212817 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:02:35.791593  212817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:02:35.791658  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:35.843848  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:35.851149  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:35.857897  212817 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:02:35.857918  212817 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:02:35.857982  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:35.886906  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:36.088088  212817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:02:36.124126  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:02:36.124198  212817 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:02:36.133205  212817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:02:36.138856  212817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:02:36.142971  212817 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:02:36.143095  212817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:02:36.181741  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:02:36.181818  212817 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:02:36.254236  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:02:36.254310  212817 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:02:36.322260  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:02:36.322280  212817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:02:36.390606  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:02:36.390626  212817 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:02:36.424721  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:02:36.424741  212817 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:02:36.444297  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:02:36.444367  212817 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:02:36.462864  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:02:36.462940  212817 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:02:36.483116  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:02:36.483187  212817 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:02:36.503418  212817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1008 23:02:38.272306  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	W1008 23:02:40.272582  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	I1008 23:02:42.409536  212817 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.27625014s)
	I1008 23:02:42.409598  212817 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.270671234s)
	I1008 23:02:42.409924  212817 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.266797959s)
	I1008 23:02:42.409942  212817 api_server.go:72] duration metric: took 6.698023539s to wait for apiserver process to appear ...
	I1008 23:02:42.409948  212817 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:02:42.409961  212817 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 23:02:42.425248  212817 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 23:02:42.425279  212817 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 23:02:42.648411  212817 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.144902419s)
	I1008 23:02:42.651702  212817 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-598445 addons enable metrics-server
	
	I1008 23:02:42.654552  212817 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1008 23:02:42.657438  212817 addons.go:514] duration metric: took 6.945186001s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1008 23:02:42.910880  212817 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 23:02:42.922494  212817 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 23:02:42.923814  212817 api_server.go:141] control plane version: v1.34.1
	I1008 23:02:42.923837  212817 api_server.go:131] duration metric: took 513.880749ms to wait for apiserver health ...
	I1008 23:02:42.923846  212817 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:02:42.932245  212817 system_pods.go:59] 8 kube-system pods found
	I1008 23:02:42.932347  212817 system_pods.go:61] "coredns-66bc5c9577-2qjrv" [ec8d975b-2220-48dc-9c8c-65169391c742] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1008 23:02:42.932404  212817 system_pods.go:61] "etcd-newest-cni-598445" [474854ed-7e7c-49d6-9fb8-b572780f4e37] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 23:02:42.932432  212817 system_pods.go:61] "kindnet-26wwk" [4c47d037-c2a6-404d-82fd-1efa6e55ad21] Running
	I1008 23:02:42.932476  212817 system_pods.go:61] "kube-apiserver-newest-cni-598445" [7fc58799-0bb0-45ee-a53c-b51583ec84ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:02:42.932503  212817 system_pods.go:61] "kube-controller-manager-newest-cni-598445" [c0fdc1df-8d0b-4bb3-a383-9d9fd102b6a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:02:42.932531  212817 system_pods.go:61] "kube-proxy-qjt47" [d3bc119f-422b-4196-a3e2-c9daa5264ebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 23:02:42.932568  212817 system_pods.go:61] "kube-scheduler-newest-cni-598445" [c795f706-3409-4b43-b1f8-2f3a465a03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:02:42.932602  212817 system_pods.go:61] "storage-provisioner" [03aabe9b-e840-4770-bff2-e17a5caad244] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1008 23:02:42.932624  212817 system_pods.go:74] duration metric: took 8.770752ms to wait for pod list to return data ...
	I1008 23:02:42.932664  212817 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:02:42.935803  212817 default_sa.go:45] found service account: "default"
	I1008 23:02:42.935886  212817 default_sa.go:55] duration metric: took 3.192265ms for default service account to be created ...
	I1008 23:02:42.935928  212817 kubeadm.go:586] duration metric: took 7.224006777s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1008 23:02:42.935992  212817 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:02:42.941003  212817 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:02:42.941103  212817 node_conditions.go:123] node cpu capacity is 2
	I1008 23:02:42.941148  212817 node_conditions.go:105] duration metric: took 5.135364ms to run NodePressure ...
	I1008 23:02:42.941208  212817 start.go:241] waiting for startup goroutines ...
	I1008 23:02:42.941231  212817 start.go:246] waiting for cluster config update ...
	I1008 23:02:42.941272  212817 start.go:255] writing updated cluster config ...
	I1008 23:02:42.941655  212817 ssh_runner.go:195] Run: rm -f paused
	I1008 23:02:43.057402  212817 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:02:43.060682  212817 out.go:179] * Done! kubectl is now configured to use "newest-cni-598445" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.572392994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.576196828Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fe830307-213b-4491-80a8-de0e5ba7c883 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.587030984Z" level=info msg="Ran pod sandbox 7f3bbf6738026cd4c6a96a5b757ad88f132481e96e3d8c78ca352f4605d693ec with infra container: kube-system/kindnet-26wwk/POD" id=fe830307-213b-4491-80a8-de0e5ba7c883 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.588380992Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-qjt47/POD" id=fe238f8d-1c07-4c6c-9437-ec9907832ad1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.588535989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.611704925Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fe238f8d-1c07-4c6c-9437-ec9907832ad1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.624895027Z" level=info msg="Ran pod sandbox 282dd48acd26e952ffed3cdd0643c676872f36ccb78c1ebeb3bec096ff942f35 with infra container: kube-system/kube-proxy-qjt47/POD" id=fe238f8d-1c07-4c6c-9437-ec9907832ad1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.62999231Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=74401eb2-ac4e-4a2e-8738-2f7c98ef4790 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.646161865Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=efba4ffb-72e1-4df4-92c1-fc9ab5c2c619 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.658332335Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7efbbf01-0af0-4f42-b6e0-c987f69800d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.65920817Z" level=info msg="Creating container: kube-system/kindnet-26wwk/kindnet-cni" id=80b02820-9521-4714-b40c-1cb05f6fec46 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.674622647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.682724623Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9cb82be5-216b-4a5d-b654-2bc08d6be8c2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.688275688Z" level=info msg="Creating container: kube-system/kube-proxy-qjt47/kube-proxy" id=59ccef1e-a6d4-4c06-b151-f85f3c80ee70 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.688539643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.709236669Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.713175684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.726320854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.727494408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.779900505Z" level=info msg="Created container 7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3: kube-system/kindnet-26wwk/kindnet-cni" id=80b02820-9521-4714-b40c-1cb05f6fec46 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.7929315Z" level=info msg="Starting container: 7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3" id=2bcf2e99-0888-4d0f-9a9f-f63bf69e99a9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.801456948Z" level=info msg="Started container" PID=1055 containerID=7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3 description=kube-system/kindnet-26wwk/kindnet-cni id=2bcf2e99-0888-4d0f-9a9f-f63bf69e99a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f3bbf6738026cd4c6a96a5b757ad88f132481e96e3d8c78ca352f4605d693ec
	Oct 08 23:02:42 newest-cni-598445 crio[611]: time="2025-10-08T23:02:42.392291974Z" level=info msg="Created container 8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061: kube-system/kube-proxy-qjt47/kube-proxy" id=59ccef1e-a6d4-4c06-b151-f85f3c80ee70 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:42 newest-cni-598445 crio[611]: time="2025-10-08T23:02:42.393159694Z" level=info msg="Starting container: 8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061" id=9eebee4e-9bff-41f4-aca4-4a437343fafd name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:02:42 newest-cni-598445 crio[611]: time="2025-10-08T23:02:42.39691268Z" level=info msg="Started container" PID=1056 containerID=8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061 description=kube-system/kube-proxy-qjt47/kube-proxy id=9eebee4e-9bff-41f4-aca4-4a437343fafd name=/runtime.v1.RuntimeService/StartContainer sandboxID=282dd48acd26e952ffed3cdd0643c676872f36ccb78c1ebeb3bec096ff942f35
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7d0704f1922f2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   7f3bbf6738026       kindnet-26wwk                               kube-system
	8cf8c0d7516b4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   282dd48acd26e       kube-proxy-qjt47                            kube-system
	0bc68321c4680       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   cb6784edc9742       kube-controller-manager-newest-cni-598445   kube-system
	3e64781da3908       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   e0ec2562f3b7c       kube-scheduler-newest-cni-598445            kube-system
	a02593f091bf4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   7f5f04706b5ed       etcd-newest-cni-598445                      kube-system
	2e1d924927682       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   06fc62c0880c7       kube-apiserver-newest-cni-598445            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-598445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-598445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=newest-cni-598445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T23_02_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 23:02:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-598445
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 23:02:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 23:02:41 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 23:02:41 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 23:02:41 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 08 Oct 2025 23:02:41 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-598445
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f5c1dd74c18490bbcc103a1ab73ab27
	  System UUID:                fc86293c-d5bb-4314-9225-e89a6cd1ff6e
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-598445                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-26wwk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-598445             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-598445    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-qjt47                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-598445             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 43s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 43s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-598445 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-598445 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-598445 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-598445 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           26s                node-controller  Node newest-cni-598445 event: Registered Node newest-cni-598445 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-598445 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-598445 event: Registered Node newest-cni-598445 in Controller
	
	
	==> dmesg <==
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:58] overlayfs: idmapped layers are currently not supported
	[  +5.164783] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:00] overlayfs: idmapped layers are currently not supported
	[  +1.568442] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:02] overlayfs: idmapped layers are currently not supported
	[  +3.214273] overlayfs: idmapped layers are currently not supported
	[ +30.544324] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a02593f091bf4e1b869b3247780e1413dad2f19604ea728bddf11702a939688d] <==
	{"level":"warn","ts":"2025-10-08T23:02:38.606323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.620500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.651341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.667133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.684557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.713088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.731888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.746447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.766241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.791579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.808676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.827760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.842904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.870298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.888793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.907358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.928196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.941501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.961100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.991454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.045421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.094829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.126416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.158764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.293965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60392","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:02:47 up  1:45,  0 user,  load average: 6.33, 3.64, 2.46
	Linux newest-cni-598445 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3] <==
	I1008 23:02:41.934840       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 23:02:41.935062       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 23:02:41.935156       1 main.go:148] setting mtu 1500 for CNI 
	I1008 23:02:41.935213       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 23:02:41.935252       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T23:02:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 23:02:42.266565       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 23:02:42.267420       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 23:02:42.267450       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 23:02:42.273880       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [2e1d9249276823a522db408755e54fa95f368a0472e8b4462337afea4b239c01] <==
	I1008 23:02:40.875709       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1008 23:02:40.875770       1 policy_source.go:240] refreshing policies
	I1008 23:02:40.905982       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 23:02:40.908622       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 23:02:40.925326       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 23:02:40.925583       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1008 23:02:40.973207       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 23:02:40.973786       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1008 23:02:40.973841       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 23:02:40.999176       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 23:02:40.999496       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 23:02:40.999611       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1008 23:02:41.066935       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 23:02:41.177838       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 23:02:41.400558       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 23:02:41.850832       1 controller.go:667] quota admission added evaluator for: namespaces
	I1008 23:02:42.025969       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 23:02:42.196571       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 23:02:42.246345       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 23:02:42.559027       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.38.244"}
	I1008 23:02:42.642381       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.189.41"}
	I1008 23:02:44.799920       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 23:02:45.104731       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 23:02:45.220214       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 23:02:45.303695       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0bc68321c46808e03761c9cb44ba461f8c6e1bc97bd15431fd9e2c7996063a0a] <==
	I1008 23:02:44.752343       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1008 23:02:44.755798       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 23:02:44.756137       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:02:44.759543       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 23:02:44.760839       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 23:02:44.761463       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1008 23:02:44.766601       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1008 23:02:44.766807       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 23:02:44.768744       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 23:02:44.790250       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 23:02:44.790349       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 23:02:44.790467       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-598445"
	I1008 23:02:44.790520       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 23:02:44.790565       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1008 23:02:44.791578       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 23:02:44.791591       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 23:02:44.793364       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 23:02:44.793600       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1008 23:02:44.794937       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 23:02:44.796191       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 23:02:44.797780       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1008 23:02:44.800091       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 23:02:44.804931       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 23:02:44.807964       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 23:02:44.818865       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061] <==
	I1008 23:02:43.082366       1 server_linux.go:53] "Using iptables proxy"
	I1008 23:02:43.794985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 23:02:43.896399       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 23:02:43.905489       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1008 23:02:43.918576       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 23:02:44.206011       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 23:02:44.206071       1 server_linux.go:132] "Using iptables Proxier"
	I1008 23:02:44.209967       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 23:02:44.210284       1 server.go:527] "Version info" version="v1.34.1"
	I1008 23:02:44.210308       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:02:44.211904       1 config.go:200] "Starting service config controller"
	I1008 23:02:44.211934       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 23:02:44.211966       1 config.go:106] "Starting endpoint slice config controller"
	I1008 23:02:44.211980       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 23:02:44.211992       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 23:02:44.211997       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 23:02:44.212621       1 config.go:309] "Starting node config controller"
	I1008 23:02:44.212640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 23:02:44.212647       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 23:02:44.312236       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 23:02:44.312241       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 23:02:44.312257       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3e64781da390812405d95dae933cfc7c3dd1126eb65abc070fb5a61a4a805bd1] <==
	I1008 23:02:38.087233       1 serving.go:386] Generated self-signed cert in-memory
	I1008 23:02:44.418128       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 23:02:44.418164       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:02:44.425821       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 23:02:44.425942       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 23:02:44.425968       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 23:02:44.426011       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 23:02:44.432177       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:02:44.437688       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:02:44.438302       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:02:44.442581       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:02:44.526801       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 23:02:44.539111       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:02:44.543275       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 08 23:02:38 newest-cni-598445 kubelet[730]: E1008 23:02:38.976884     730 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-598445\" not found" node="newest-cni-598445"
	Oct 08 23:02:39 newest-cni-598445 kubelet[730]: E1008 23:02:39.667544     730 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-598445\" not found" node="newest-cni-598445"
	Oct 08 23:02:40 newest-cni-598445 kubelet[730]: I1008 23:02:40.561280     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-598445"
	Oct 08 23:02:40 newest-cni-598445 kubelet[730]: I1008 23:02:40.945386     730 apiserver.go:52] "Watching apiserver"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.058478     730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.106878     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3bc119f-422b-4196-a3e2-c9daa5264ebc-xtables-lock\") pod \"kube-proxy-qjt47\" (UID: \"d3bc119f-422b-4196-a3e2-c9daa5264ebc\") " pod="kube-system/kube-proxy-qjt47"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.107153     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c47d037-c2a6-404d-82fd-1efa6e55ad21-lib-modules\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.107270     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4c47d037-c2a6-404d-82fd-1efa6e55ad21-cni-cfg\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.107359     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c47d037-c2a6-404d-82fd-1efa6e55ad21-xtables-lock\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.107458     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3bc119f-422b-4196-a3e2-c9daa5264ebc-lib-modules\") pod \"kube-proxy-qjt47\" (UID: \"d3bc119f-422b-4196-a3e2-c9daa5264ebc\") " pod="kube-system/kube-proxy-qjt47"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: E1008 23:02:41.147288     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-598445\" already exists" pod="kube-system/etcd-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.147471     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.148213     730 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.148418     730 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.148525     730 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.150862     730 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: E1008 23:02:41.289195     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-598445\" already exists" pod="kube-system/kube-apiserver-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.289434     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.306875     730 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: E1008 23:02:41.438302     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-598445\" already exists" pod="kube-system/kube-controller-manager-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.438485     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: E1008 23:02:41.499525     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-598445\" already exists" pod="kube-system/kube-scheduler-newest-cni-598445"
	Oct 08 23:02:44 newest-cni-598445 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 23:02:44 newest-cni-598445 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 23:02:44 newest-cni-598445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-598445 -n newest-cni-598445
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-598445 -n newest-cni-598445: exit status 2 (376.612965ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-598445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-2qjrv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l7tgw kubernetes-dashboard-855c9754f9-8pc59
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-598445 describe pod coredns-66bc5c9577-2qjrv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l7tgw kubernetes-dashboard-855c9754f9-8pc59
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-598445 describe pod coredns-66bc5c9577-2qjrv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l7tgw kubernetes-dashboard-855c9754f9-8pc59: exit status 1 (93.397221ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-2qjrv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-l7tgw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-8pc59" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-598445 describe pod coredns-66bc5c9577-2qjrv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l7tgw kubernetes-dashboard-855c9754f9-8pc59: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-598445
helpers_test.go:243: (dbg) docker inspect newest-cni-598445:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0",
	        "Created": "2025-10-08T23:01:43.562370907Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-08T23:02:28.136686601Z",
	            "FinishedAt": "2025-10-08T23:02:27.167506676Z"
	        },
	        "Image": "sha256:0e619a02aa1f399c748a05785ca381533d7a01df724b62f3a9ddd9e288db8f6f",
	        "ResolvConfPath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/hosts",
	        "LogPath": "/var/lib/docker/containers/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0/d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0-json.log",
	        "Name": "/newest-cni-598445",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-598445:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-598445",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d0d27dc20f53b94a5024e841ee21a131b180ba258999bf1e9b2fdcbb771eb2b0",
	                "LowerDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9-init/diff:/var/lib/docker/overlay2/ca42b10a231c28e7d40f3b25e04692e36f0276b8b9f4ea012b2f02a2f4b58871/diff",
	                "MergedDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/283de2f27f3bea0cd98a3402d7e380848a89cbcac8b8a542a603d01bed0476f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-598445",
	                "Source": "/var/lib/docker/volumes/newest-cni-598445/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-598445",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-598445",
	                "name.minikube.sigs.k8s.io": "newest-cni-598445",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f2e5a1e7828c7e856e39451ee47b9ae97b0bf272b83381271047c9d44817e48",
	            "SandboxKey": "/var/run/docker/netns/0f2e5a1e7828",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-598445": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:e2:1b:de:40:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a0b1ff28b0c97915ff48c8d0f7665a15b64c8eae67960eb9db0d077a1b90fb71",
	                    "EndpointID": "88ebd7c0796b4746984baef6fc88cddd65c767784a5490522474c04bb2a8aa13",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-598445",
	                        "d0d27dc20f53"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-598445 -n newest-cni-598445
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-598445 -n newest-cni-598445: exit status 2 (348.685269ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-598445 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-598445 logs -n 25: (1.116234531s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 22:58 UTC │ 08 Oct 25 22:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-825429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 22:59 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-779490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │                     │
	│ stop    │ -p embed-certs-825429 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ stop    │ -p default-k8s-diff-port-779490 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ addons  │ enable dashboard -p embed-certs-825429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-779490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:00 UTC │
	│ start   │ -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:00 UTC │ 08 Oct 25 23:01 UTC │
	│ image   │ default-k8s-diff-port-779490 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p default-k8s-diff-port-779490 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ image   │ embed-certs-825429 image list --format=json                                                                                                                                                                                                   │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ pause   │ -p embed-certs-825429 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-779490                                                                                                                                                                                                               │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ delete  │ -p embed-certs-825429                                                                                                                                                                                                                         │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ delete  │ -p default-k8s-diff-port-779490                                                                                                                                                                                                               │ default-k8s-diff-port-779490 │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ start   │ -p newest-cni-598445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:02 UTC │
	│ delete  │ -p embed-certs-825429                                                                                                                                                                                                                         │ embed-certs-825429           │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │ 08 Oct 25 23:01 UTC │
	│ start   │ -p auto-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-840929                  │ jenkins │ v1.37.0 │ 08 Oct 25 23:01 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-598445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │                     │
	│ stop    │ -p newest-cni-598445 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │ 08 Oct 25 23:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-598445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │ 08 Oct 25 23:02 UTC │
	│ start   │ -p newest-cni-598445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │ 08 Oct 25 23:02 UTC │
	│ image   │ newest-cni-598445 image list --format=json                                                                                                                                                                                                    │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │ 08 Oct 25 23:02 UTC │
	│ pause   │ -p newest-cni-598445 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-598445            │ jenkins │ v1.37.0 │ 08 Oct 25 23:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 23:02:27
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 23:02:27.845123  212817 out.go:360] Setting OutFile to fd 1 ...
	I1008 23:02:27.845260  212817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:02:27.845273  212817 out.go:374] Setting ErrFile to fd 2...
	I1008 23:02:27.845279  212817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 23:02:27.845554  212817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 23:02:27.845977  212817 out.go:368] Setting JSON to false
	I1008 23:02:27.846941  212817 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6298,"bootTime":1759958250,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 23:02:27.847012  212817 start.go:141] virtualization:  
	I1008 23:02:27.850037  212817 out.go:179] * [newest-cni-598445] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 23:02:27.853930  212817 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 23:02:27.854065  212817 notify.go:220] Checking for updates...
	I1008 23:02:27.860158  212817 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 23:02:27.863121  212817 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:02:27.865862  212817 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 23:02:27.868729  212817 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 23:02:27.871707  212817 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 23:02:27.874919  212817 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:02:27.875598  212817 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 23:02:27.909532  212817 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 23:02:27.909709  212817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:02:27.970938  212817 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 23:02:27.961329711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:02:27.971066  212817 docker.go:318] overlay module found
	I1008 23:02:27.974224  212817 out.go:179] * Using the docker driver based on existing profile
	I1008 23:02:27.977138  212817 start.go:305] selected driver: docker
	I1008 23:02:27.977157  212817 start.go:925] validating driver "docker" against &{Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:02:27.977259  212817 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 23:02:27.978187  212817 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 23:02:28.035102  212817 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 23:02:28.025432092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 23:02:28.035488  212817 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1008 23:02:28.035527  212817 cni.go:84] Creating CNI manager for ""
	I1008 23:02:28.035587  212817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:02:28.035631  212817 start.go:349] cluster config:
	{Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:02:28.040738  212817 out.go:179] * Starting "newest-cni-598445" primary control-plane node in "newest-cni-598445" cluster
	I1008 23:02:28.043633  212817 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 23:02:28.046749  212817 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1008 23:02:28.049800  212817 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:02:28.049839  212817 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 23:02:28.049867  212817 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 23:02:28.049877  212817 cache.go:58] Caching tarball of preloaded images
	I1008 23:02:28.049964  212817 preload.go:233] Found /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1008 23:02:28.049974  212817 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1008 23:02:28.050105  212817 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/config.json ...
	I1008 23:02:28.076607  212817 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1008 23:02:28.076632  212817 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1008 23:02:28.076652  212817 cache.go:232] Successfully downloaded all kic artifacts
	I1008 23:02:28.076677  212817 start.go:360] acquireMachinesLock for newest-cni-598445: {Name:mkd45e8e16e845f1601dda37260d96039774ac83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 23:02:28.076769  212817 start.go:364] duration metric: took 40.247µs to acquireMachinesLock for "newest-cni-598445"
	I1008 23:02:28.076789  212817 start.go:96] Skipping create...Using existing machine configuration
	I1008 23:02:28.076804  212817 fix.go:54] fixHost starting: 
	I1008 23:02:28.077070  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:28.096655  212817 fix.go:112] recreateIfNeeded on newest-cni-598445: state=Stopped err=<nil>
	W1008 23:02:28.096699  212817 fix.go:138] unexpected machine state, will restart: <nil>
	W1008 23:02:28.773352  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	W1008 23:02:31.273111  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	I1008 23:02:28.100159  212817 out.go:252] * Restarting existing docker container for "newest-cni-598445" ...
	I1008 23:02:28.100303  212817 cli_runner.go:164] Run: docker start newest-cni-598445
	I1008 23:02:28.360895  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:28.392622  212817 kic.go:430] container "newest-cni-598445" state is running.
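The container-state checks above shell out to `docker container inspect --format={{.State.Status}}`. A minimal Go sketch of the same check done through the Docker Engine API (the container name is copied from the log; the client setup is illustrative, not minikube's code):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        // Equivalent of `docker container inspect newest-cni-598445 --format={{.State.Status}}`,
        // done via the Docker Engine API instead of shelling out.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        info, err := cli.ContainerInspect(context.Background(), "newest-cni-598445")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("state:", info.State.Status) // e.g. "running" after `docker start`
    }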
	I1008 23:02:28.393328  212817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-598445
	I1008 23:02:28.418199  212817 profile.go:143] Saving config to /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/config.json ...
	I1008 23:02:28.418478  212817 machine.go:93] provisionDockerMachine start ...
	I1008 23:02:28.418536  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:28.442421  212817 main.go:141] libmachine: Using SSH client type: native
	I1008 23:02:28.442791  212817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1008 23:02:28.442815  212817 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 23:02:28.443800  212817 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1008 23:02:31.589260  212817 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-598445
	
	I1008 23:02:31.589283  212817 ubuntu.go:182] provisioning hostname "newest-cni-598445"
	I1008 23:02:31.589341  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:31.609355  212817 main.go:141] libmachine: Using SSH client type: native
	I1008 23:02:31.609809  212817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1008 23:02:31.609830  212817 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-598445 && echo "newest-cni-598445" | sudo tee /etc/hostname
	I1008 23:02:31.771332  212817 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-598445
	
	I1008 23:02:31.771499  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:31.790840  212817 main.go:141] libmachine: Using SSH client type: native
	I1008 23:02:31.791162  212817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1008 23:02:31.791183  212817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-598445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-598445/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-598445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 23:02:31.937825  212817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 23:02:31.937920  212817 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21681-2481/.minikube CaCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21681-2481/.minikube}
	I1008 23:02:31.937968  212817 ubuntu.go:190] setting up certificates
	I1008 23:02:31.938003  212817 provision.go:84] configureAuth start
	I1008 23:02:31.938085  212817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-598445
	I1008 23:02:31.960394  212817 provision.go:143] copyHostCerts
	I1008 23:02:31.960464  212817 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem, removing ...
	I1008 23:02:31.960485  212817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem
	I1008 23:02:31.960570  212817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/key.pem (1675 bytes)
	I1008 23:02:31.960681  212817 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem, removing ...
	I1008 23:02:31.960697  212817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem
	I1008 23:02:31.960725  212817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/ca.pem (1082 bytes)
	I1008 23:02:31.960794  212817 exec_runner.go:144] found /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem, removing ...
	I1008 23:02:31.960804  212817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem
	I1008 23:02:31.960831  212817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21681-2481/.minikube/cert.pem (1123 bytes)
	I1008 23:02:31.960895  212817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem org=jenkins.newest-cni-598445 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-598445]
	I1008 23:02:32.210792  212817 provision.go:177] copyRemoteCerts
	I1008 23:02:32.210865  212817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 23:02:32.210922  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:32.228908  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:32.335467  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1008 23:02:32.353013  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1008 23:02:32.372099  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 23:02:32.390319  212817 provision.go:87] duration metric: took 452.261183ms to configureAuth
	I1008 23:02:32.390347  212817 ubuntu.go:206] setting minikube options for container-runtime
	I1008 23:02:32.390606  212817 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:02:32.390711  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:32.409766  212817 main.go:141] libmachine: Using SSH client type: native
	I1008 23:02:32.410082  212817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33101 <nil> <nil>}
	I1008 23:02:32.410103  212817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1008 23:02:32.725036  212817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1008 23:02:32.725065  212817 machine.go:96] duration metric: took 4.306576145s to provisionDockerMachine
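Provisioning in this phase is a series of one-off commands run over SSH against the node container. A rough Go sketch of that pattern using golang.org/x/crypto/ssh, assuming the host port 33101, the `docker` user and the id_rsa path shown in the log; everything else (timeouts, error handling) is illustrative:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and port are taken from the log lines above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
        }
        conn, err := ssh.Dial("tcp", "127.0.0.1:33101", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        session, err := conn.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Run one command, the same shape as the "hostname" probe in the log.
        out, err := session.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out)
    }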
	I1008 23:02:32.725078  212817 start.go:293] postStartSetup for "newest-cni-598445" (driver="docker")
	I1008 23:02:32.725089  212817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 23:02:32.725157  212817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 23:02:32.725201  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:32.746345  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:32.854165  212817 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 23:02:32.857908  212817 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1008 23:02:32.857939  212817 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1008 23:02:32.857952  212817 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/addons for local assets ...
	I1008 23:02:32.858007  212817 filesync.go:126] Scanning /home/jenkins/minikube-integration/21681-2481/.minikube/files for local assets ...
	I1008 23:02:32.858092  212817 filesync.go:149] local asset: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem -> 42862.pem in /etc/ssl/certs
	I1008 23:02:32.858200  212817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 23:02:32.865938  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:02:32.884636  212817 start.go:296] duration metric: took 159.542044ms for postStartSetup
	I1008 23:02:32.884735  212817 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 23:02:32.884781  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:32.903187  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:33.006123  212817 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1008 23:02:33.017878  212817 fix.go:56] duration metric: took 4.941076397s for fixHost
	I1008 23:02:33.017903  212817 start.go:83] releasing machines lock for "newest-cni-598445", held for 4.941125054s
	I1008 23:02:33.017981  212817 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-598445
	I1008 23:02:33.036352  212817 ssh_runner.go:195] Run: cat /version.json
	I1008 23:02:33.036411  212817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 23:02:33.036493  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:33.036413  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:33.057999  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:33.061421  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:33.259658  212817 ssh_runner.go:195] Run: systemctl --version
	I1008 23:02:33.266674  212817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1008 23:02:33.307730  212817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 23:02:33.312343  212817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 23:02:33.312455  212817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1008 23:02:33.320647  212817 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1008 23:02:33.320669  212817 start.go:495] detecting cgroup driver to use...
	I1008 23:02:33.320717  212817 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1008 23:02:33.320766  212817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 23:02:33.338044  212817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 23:02:33.355104  212817 docker.go:218] disabling cri-docker service (if available) ...
	I1008 23:02:33.355199  212817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1008 23:02:33.371161  212817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1008 23:02:33.385402  212817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1008 23:02:33.502865  212817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1008 23:02:33.621251  212817 docker.go:234] disabling docker service ...
	I1008 23:02:33.621342  212817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1008 23:02:33.638444  212817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1008 23:02:33.652108  212817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1008 23:02:33.796983  212817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1008 23:02:33.926134  212817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1008 23:02:33.941384  212817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 23:02:33.957057  212817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1008 23:02:33.957124  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:33.966819  212817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1008 23:02:33.966894  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:33.976008  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:33.985217  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:33.994995  212817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 23:02:34.007343  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:34.018257  212817 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:34.027552  212817 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1008 23:02:34.037358  212817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 23:02:34.045490  212817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 23:02:34.054624  212817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:02:34.177161  212817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1008 23:02:34.299834  212817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1008 23:02:34.299959  212817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1008 23:02:34.304375  212817 start.go:563] Will wait 60s for crictl version
	I1008 23:02:34.304487  212817 ssh_runner.go:195] Run: which crictl
	I1008 23:02:34.308149  212817 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1008 23:02:34.336527  212817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
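After restarting cri-o, the log waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A small Go sketch of that kind of wait loop; the timeout mirrors the log, the poll interval is an arbitrary choice:

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"
    )

    // waitForSocket polls until the runtime socket exists or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            log.Fatal(err)
        }
        fmt.Println("crio socket is ready")
    }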
	I1008 23:02:34.336622  212817 ssh_runner.go:195] Run: crio --version
	I1008 23:02:34.367482  212817 ssh_runner.go:195] Run: crio --version
	I1008 23:02:34.403747  212817 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1008 23:02:34.406666  212817 cli_runner.go:164] Run: docker network inspect newest-cni-598445 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1008 23:02:34.423175  212817 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1008 23:02:34.427316  212817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:02:34.440399  212817 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1008 23:02:34.443264  212817 kubeadm.go:883] updating cluster {Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1008 23:02:34.443421  212817 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 23:02:34.443501  212817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:02:34.487106  212817 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:02:34.487134  212817 crio.go:433] Images already preloaded, skipping extraction
	I1008 23:02:34.487215  212817 ssh_runner.go:195] Run: sudo crictl images --output json
	I1008 23:02:34.518998  212817 crio.go:514] all images are preloaded for cri-o runtime.
	I1008 23:02:34.519026  212817 cache_images.go:85] Images are preloaded, skipping loading
	I1008 23:02:34.519037  212817 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1008 23:02:34.519164  212817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-598445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 23:02:34.519277  212817 ssh_runner.go:195] Run: crio config
	I1008 23:02:34.594663  212817 cni.go:84] Creating CNI manager for ""
	I1008 23:02:34.594689  212817 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 23:02:34.594712  212817 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1008 23:02:34.594738  212817 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-598445 NodeName:newest-cni-598445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 23:02:34.594889  212817 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-598445"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 23:02:34.594972  212817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1008 23:02:34.605092  212817 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 23:02:34.605275  212817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 23:02:34.614139  212817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1008 23:02:34.628285  212817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 23:02:34.641238  212817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
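The generated kubeadm config above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick Go sketch that reads such a file and prints each document's apiVersion and kind as a sanity check, assuming the /var/tmp/minikube/kubeadm.yaml.new path from the log and gopkg.in/yaml.v3; this is not how minikube itself validates the file:

    package main

    import (
        "bytes"
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        dec := yaml.NewDecoder(bytes.NewReader(raw))
        for {
            // Only the type metadata is decoded; the rest of each document is ignored.
            var meta struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&meta); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                log.Fatal(err)
            }
            fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
        }
    }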
	I1008 23:02:34.654859  212817 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1008 23:02:34.658719  212817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 23:02:34.669334  212817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:02:34.793088  212817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:02:34.810703  212817 certs.go:69] Setting up /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445 for IP: 192.168.85.2
	I1008 23:02:34.810728  212817 certs.go:195] generating shared ca certs ...
	I1008 23:02:34.810744  212817 certs.go:227] acquiring lock for ca certs: {Name:mkab9dc61d515c295ed0e9970b6a734168f4ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:34.810969  212817 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key
	I1008 23:02:34.811041  212817 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key
	I1008 23:02:34.811055  212817 certs.go:257] generating profile certs ...
	I1008 23:02:34.811167  212817 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/client.key
	I1008 23:02:34.811257  212817 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key.1a399b11
	I1008 23:02:34.811338  212817 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.key
	I1008 23:02:34.811476  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem (1338 bytes)
	W1008 23:02:34.811534  212817 certs.go:480] ignoring /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286_empty.pem, impossibly tiny 0 bytes
	I1008 23:02:34.811549  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca-key.pem (1671 bytes)
	I1008 23:02:34.811578  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/ca.pem (1082 bytes)
	I1008 23:02:34.811635  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/cert.pem (1123 bytes)
	I1008 23:02:34.811663  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/certs/key.pem (1675 bytes)
	I1008 23:02:34.811728  212817 certs.go:484] found cert: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem (1708 bytes)
	I1008 23:02:34.812399  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 23:02:34.834729  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1008 23:02:34.853547  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 23:02:34.871563  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1008 23:02:34.895254  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1008 23:02:34.913155  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1008 23:02:34.932604  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 23:02:34.951931  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/newest-cni-598445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1008 23:02:34.988055  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/ssl/certs/42862.pem --> /usr/share/ca-certificates/42862.pem (1708 bytes)
	I1008 23:02:35.020450  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 23:02:35.050591  212817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21681-2481/.minikube/certs/4286.pem --> /usr/share/ca-certificates/4286.pem (1338 bytes)
	I1008 23:02:35.072539  212817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 23:02:35.087992  212817 ssh_runner.go:195] Run: openssl version
	I1008 23:02:35.095325  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 23:02:35.104682  212817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:02:35.108996  212817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 21:52 /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:02:35.109075  212817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 23:02:35.156427  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1008 23:02:35.164815  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4286.pem && ln -fs /usr/share/ca-certificates/4286.pem /etc/ssl/certs/4286.pem"
	I1008 23:02:35.174818  212817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4286.pem
	I1008 23:02:35.178931  212817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 21:58 /usr/share/ca-certificates/4286.pem
	I1008 23:02:35.178996  212817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4286.pem
	I1008 23:02:35.220434  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4286.pem /etc/ssl/certs/51391683.0"
	I1008 23:02:35.229272  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42862.pem && ln -fs /usr/share/ca-certificates/42862.pem /etc/ssl/certs/42862.pem"
	I1008 23:02:35.237888  212817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42862.pem
	I1008 23:02:35.241846  212817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 21:58 /usr/share/ca-certificates/42862.pem
	I1008 23:02:35.241950  212817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42862.pem
	I1008 23:02:35.283354  212817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42862.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 23:02:35.291498  212817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 23:02:35.295580  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 23:02:35.342488  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 23:02:35.384934  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 23:02:35.426847  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 23:02:35.470747  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 23:02:35.516733  212817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
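The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate remains valid for at least the next 24 hours. An equivalent check in Go (one certificate path copied from the log; the 24h window mirrors the openssl flag):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Same semantics as `-checkend 86400`: fail if expiry is within 24 hours.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid until", cert.NotAfter)
    }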
	I1008 23:02:35.565033  212817 kubeadm.go:400] StartCluster: {Name:newest-cni-598445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-598445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 23:02:35.565119  212817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1008 23:02:35.565185  212817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1008 23:02:35.641047  212817 cri.go:89] found id: ""
	I1008 23:02:35.641122  212817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 23:02:35.651383  212817 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1008 23:02:35.651445  212817 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1008 23:02:35.651524  212817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 23:02:35.689189  212817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 23:02:35.689948  212817 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-598445" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:02:35.690299  212817 kubeconfig.go:62] /home/jenkins/minikube-integration/21681-2481/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-598445" cluster setting kubeconfig missing "newest-cni-598445" context setting]
	I1008 23:02:35.690797  212817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
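The kubeconfig repair step boils down to checking whether the profile's cluster and context entries exist in the kubeconfig before rewriting it. A short client-go sketch of that check, assuming the kubeconfig path and profile name shown in the log:

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21681-2481/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        name := "newest-cni-598445"
        _, hasCluster := cfg.Clusters[name]
        _, hasContext := cfg.Contexts[name]
        // If either is missing, the file "needs updating (will repair)" as logged above.
        fmt.Printf("cluster entry: %v, context entry: %v\n", hasCluster, hasContext)
    }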
	I1008 23:02:35.692559  212817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 23:02:35.710207  212817 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1008 23:02:35.710289  212817 kubeadm.go:601] duration metric: took 58.82148ms to restartPrimaryControlPlane
	I1008 23:02:35.710313  212817 kubeadm.go:402] duration metric: took 145.288624ms to StartCluster
	I1008 23:02:35.710358  212817 settings.go:142] acquiring lock: {Name:mk76650c2c39eb06f447a2de408538ba39bd1323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:35.710488  212817 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 23:02:35.711562  212817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21681-2481/kubeconfig: {Name:mkf500e18704f08c85ff09b3278725656a12f4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 23:02:35.711861  212817 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1008 23:02:35.712166  212817 config.go:182] Loaded profile config "newest-cni-598445": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 23:02:35.712224  212817 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 23:02:35.712298  212817 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-598445"
	I1008 23:02:35.712316  212817 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-598445"
	W1008 23:02:35.712322  212817 addons.go:247] addon storage-provisioner should already be in state true
	I1008 23:02:35.712343  212817 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:35.712787  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:35.712929  212817 addons.go:69] Setting dashboard=true in profile "newest-cni-598445"
	I1008 23:02:35.712946  212817 addons.go:238] Setting addon dashboard=true in "newest-cni-598445"
	W1008 23:02:35.712978  212817 addons.go:247] addon dashboard should already be in state true
	I1008 23:02:35.713002  212817 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:35.713375  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:35.713674  212817 addons.go:69] Setting default-storageclass=true in profile "newest-cni-598445"
	I1008 23:02:35.713695  212817 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-598445"
	I1008 23:02:35.713960  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:35.718306  212817 out.go:179] * Verifying Kubernetes components...
	I1008 23:02:35.722769  212817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 23:02:35.757672  212817 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1008 23:02:35.760659  212817 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1008 23:02:35.763718  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1008 23:02:35.763740  212817 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1008 23:02:35.763811  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:35.778039  212817 addons.go:238] Setting addon default-storageclass=true in "newest-cni-598445"
	W1008 23:02:35.778061  212817 addons.go:247] addon default-storageclass should already be in state true
	I1008 23:02:35.778086  212817 host.go:66] Checking if "newest-cni-598445" exists ...
	I1008 23:02:35.778528  212817 cli_runner.go:164] Run: docker container inspect newest-cni-598445 --format={{.State.Status}}
	I1008 23:02:35.788710  212817 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1008 23:02:33.773001  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	W1008 23:02:35.776876  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	I1008 23:02:35.791570  212817 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:02:35.791593  212817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 23:02:35.791658  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:35.843848  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:35.851149  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:35.857897  212817 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 23:02:35.857918  212817 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 23:02:35.857982  212817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-598445
	I1008 23:02:35.886906  212817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33101 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/newest-cni-598445/id_rsa Username:docker}
	I1008 23:02:36.088088  212817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 23:02:36.124126  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1008 23:02:36.124198  212817 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1008 23:02:36.133205  212817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 23:02:36.138856  212817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 23:02:36.142971  212817 api_server.go:52] waiting for apiserver process to appear ...
	I1008 23:02:36.143095  212817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 23:02:36.181741  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1008 23:02:36.181818  212817 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1008 23:02:36.254236  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1008 23:02:36.254310  212817 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1008 23:02:36.322260  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1008 23:02:36.322280  212817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1008 23:02:36.390606  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1008 23:02:36.390626  212817 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1008 23:02:36.424721  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1008 23:02:36.424741  212817 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1008 23:02:36.444297  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1008 23:02:36.444367  212817 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1008 23:02:36.462864  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1008 23:02:36.462940  212817 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1008 23:02:36.483116  212817 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1008 23:02:36.483187  212817 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1008 23:02:36.503418  212817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1008 23:02:38.272306  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	W1008 23:02:40.272582  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	I1008 23:02:42.409536  212817 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.27625014s)
	I1008 23:02:42.409598  212817 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.270671234s)
	I1008 23:02:42.409924  212817 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.266797959s)
	I1008 23:02:42.409942  212817 api_server.go:72] duration metric: took 6.698023539s to wait for apiserver process to appear ...
	I1008 23:02:42.409948  212817 api_server.go:88] waiting for apiserver healthz status ...
	I1008 23:02:42.409961  212817 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 23:02:42.425248  212817 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1008 23:02:42.425279  212817 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1008 23:02:42.648411  212817 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.144902419s)
	I1008 23:02:42.651702  212817 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-598445 addons enable metrics-server
	
	I1008 23:02:42.654552  212817 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1008 23:02:42.657438  212817 addons.go:514] duration metric: took 6.945186001s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1008 23:02:42.910880  212817 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1008 23:02:42.922494  212817 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1008 23:02:42.923814  212817 api_server.go:141] control plane version: v1.34.1
	I1008 23:02:42.923837  212817 api_server.go:131] duration metric: took 513.880749ms to wait for apiserver health ...
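The health wait above tolerates an initial 500 (the rbac/bootstrap-roles post-start hook has not finished yet) and keeps polling /healthz until it returns 200. A compact Go sketch of such a poll loop, assuming the https://192.168.85.2:8443 endpoint from the log; skipping TLS verification is a shortcut for the sketch only, a real client would trust minikube's CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.85.2:8443/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // prints "ok" once ready
                    return
                }
                fmt.Println("healthz not ready, status:", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("apiserver did not become healthy in time")
    }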
	I1008 23:02:42.923846  212817 system_pods.go:43] waiting for kube-system pods to appear ...
	I1008 23:02:42.932245  212817 system_pods.go:59] 8 kube-system pods found
	I1008 23:02:42.932347  212817 system_pods.go:61] "coredns-66bc5c9577-2qjrv" [ec8d975b-2220-48dc-9c8c-65169391c742] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1008 23:02:42.932404  212817 system_pods.go:61] "etcd-newest-cni-598445" [474854ed-7e7c-49d6-9fb8-b572780f4e37] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1008 23:02:42.932432  212817 system_pods.go:61] "kindnet-26wwk" [4c47d037-c2a6-404d-82fd-1efa6e55ad21] Running
	I1008 23:02:42.932476  212817 system_pods.go:61] "kube-apiserver-newest-cni-598445" [7fc58799-0bb0-45ee-a53c-b51583ec84ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1008 23:02:42.932503  212817 system_pods.go:61] "kube-controller-manager-newest-cni-598445" [c0fdc1df-8d0b-4bb3-a383-9d9fd102b6a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1008 23:02:42.932531  212817 system_pods.go:61] "kube-proxy-qjt47" [d3bc119f-422b-4196-a3e2-c9daa5264ebc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1008 23:02:42.932568  212817 system_pods.go:61] "kube-scheduler-newest-cni-598445" [c795f706-3409-4b43-b1f8-2f3a465a03d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1008 23:02:42.932602  212817 system_pods.go:61] "storage-provisioner" [03aabe9b-e840-4770-bff2-e17a5caad244] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1008 23:02:42.932624  212817 system_pods.go:74] duration metric: took 8.770752ms to wait for pod list to return data ...
	I1008 23:02:42.932664  212817 default_sa.go:34] waiting for default service account to be created ...
	I1008 23:02:42.935803  212817 default_sa.go:45] found service account: "default"
	I1008 23:02:42.935886  212817 default_sa.go:55] duration metric: took 3.192265ms for default service account to be created ...
	I1008 23:02:42.935928  212817 kubeadm.go:586] duration metric: took 7.224006777s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1008 23:02:42.935992  212817 node_conditions.go:102] verifying NodePressure condition ...
	I1008 23:02:42.941003  212817 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1008 23:02:42.941103  212817 node_conditions.go:123] node cpu capacity is 2
	I1008 23:02:42.941148  212817 node_conditions.go:105] duration metric: took 5.135364ms to run NodePressure ...
	I1008 23:02:42.941208  212817 start.go:241] waiting for startup goroutines ...
	I1008 23:02:42.941231  212817 start.go:246] waiting for cluster config update ...
	I1008 23:02:42.941272  212817 start.go:255] writing updated cluster config ...
	I1008 23:02:42.941655  212817 ssh_runner.go:195] Run: rm -f paused
	I1008 23:02:43.057402  212817 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1008 23:02:43.060682  212817 out.go:179] * Done! kubectl is now configured to use "newest-cni-598445" cluster and "default" namespace by default
	W1008 23:02:42.273327  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	W1008 23:02:44.273404  207624 node_ready.go:57] node "auto-840929" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.572392994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.576196828Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fe830307-213b-4491-80a8-de0e5ba7c883 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.587030984Z" level=info msg="Ran pod sandbox 7f3bbf6738026cd4c6a96a5b757ad88f132481e96e3d8c78ca352f4605d693ec with infra container: kube-system/kindnet-26wwk/POD" id=fe830307-213b-4491-80a8-de0e5ba7c883 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.588380992Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-qjt47/POD" id=fe238f8d-1c07-4c6c-9437-ec9907832ad1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.588535989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.611704925Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fe238f8d-1c07-4c6c-9437-ec9907832ad1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.624895027Z" level=info msg="Ran pod sandbox 282dd48acd26e952ffed3cdd0643c676872f36ccb78c1ebeb3bec096ff942f35 with infra container: kube-system/kube-proxy-qjt47/POD" id=fe238f8d-1c07-4c6c-9437-ec9907832ad1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.62999231Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=74401eb2-ac4e-4a2e-8738-2f7c98ef4790 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.646161865Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=efba4ffb-72e1-4df4-92c1-fc9ab5c2c619 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.658332335Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7efbbf01-0af0-4f42-b6e0-c987f69800d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.65920817Z" level=info msg="Creating container: kube-system/kindnet-26wwk/kindnet-cni" id=80b02820-9521-4714-b40c-1cb05f6fec46 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.674622647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.682724623Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9cb82be5-216b-4a5d-b654-2bc08d6be8c2 name=/runtime.v1.ImageService/ImageStatus
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.688275688Z" level=info msg="Creating container: kube-system/kube-proxy-qjt47/kube-proxy" id=59ccef1e-a6d4-4c06-b151-f85f3c80ee70 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.688539643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.709236669Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.713175684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.726320854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.727494408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.779900505Z" level=info msg="Created container 7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3: kube-system/kindnet-26wwk/kindnet-cni" id=80b02820-9521-4714-b40c-1cb05f6fec46 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.7929315Z" level=info msg="Starting container: 7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3" id=2bcf2e99-0888-4d0f-9a9f-f63bf69e99a9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:02:41 newest-cni-598445 crio[611]: time="2025-10-08T23:02:41.801456948Z" level=info msg="Started container" PID=1055 containerID=7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3 description=kube-system/kindnet-26wwk/kindnet-cni id=2bcf2e99-0888-4d0f-9a9f-f63bf69e99a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f3bbf6738026cd4c6a96a5b757ad88f132481e96e3d8c78ca352f4605d693ec
	Oct 08 23:02:42 newest-cni-598445 crio[611]: time="2025-10-08T23:02:42.392291974Z" level=info msg="Created container 8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061: kube-system/kube-proxy-qjt47/kube-proxy" id=59ccef1e-a6d4-4c06-b151-f85f3c80ee70 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 08 23:02:42 newest-cni-598445 crio[611]: time="2025-10-08T23:02:42.393159694Z" level=info msg="Starting container: 8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061" id=9eebee4e-9bff-41f4-aca4-4a437343fafd name=/runtime.v1.RuntimeService/StartContainer
	Oct 08 23:02:42 newest-cni-598445 crio[611]: time="2025-10-08T23:02:42.39691268Z" level=info msg="Started container" PID=1056 containerID=8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061 description=kube-system/kube-proxy-qjt47/kube-proxy id=9eebee4e-9bff-41f4-aca4-4a437343fafd name=/runtime.v1.RuntimeService/StartContainer sandboxID=282dd48acd26e952ffed3cdd0643c676872f36ccb78c1ebeb3bec096ff942f35
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7d0704f1922f2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   7f3bbf6738026       kindnet-26wwk                               kube-system
	8cf8c0d7516b4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   282dd48acd26e       kube-proxy-qjt47                            kube-system
	0bc68321c4680       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   cb6784edc9742       kube-controller-manager-newest-cni-598445   kube-system
	3e64781da3908       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   e0ec2562f3b7c       kube-scheduler-newest-cni-598445            kube-system
	a02593f091bf4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   7f5f04706b5ed       etcd-newest-cni-598445                      kube-system
	2e1d924927682       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   06fc62c0880c7       kube-apiserver-newest-cni-598445            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-598445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-598445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9090b00fbf2832bf29026571965024d88b63d555
	                    minikube.k8s.io/name=newest-cni-598445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_08T23_02_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Oct 2025 23:02:13 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-598445
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Oct 2025 23:02:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Oct 2025 23:02:41 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Oct 2025 23:02:41 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Oct 2025 23:02:41 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 08 Oct 2025 23:02:41 +0000   Wed, 08 Oct 2025 23:02:06 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-598445
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f5c1dd74c18490bbcc103a1ab73ab27
	  System UUID:                fc86293c-d5bb-4314-9225-e89a6cd1ff6e
	  Boot ID:                    d82560a3-92d5-4218-b820-70e1e716d462
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-598445                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-26wwk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-598445             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-598445    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-qjt47                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-598445             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 45s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 45s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node newest-cni-598445 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-598445 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-598445 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-598445 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-598445 event: Registered Node newest-cni-598445 in Controller
	  Normal   Starting                 15s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-598445 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-598445 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-598445 event: Registered Node newest-cni-598445 in Controller
	
	
	==> dmesg <==
	[Oct 8 22:34] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:35] overlayfs: idmapped layers are currently not supported
	[ +39.985812] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:36] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:37] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:38] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:40] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:42] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:43] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:44] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:45] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:46] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:50] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:53] overlayfs: idmapped layers are currently not supported
	[ +34.837672] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:54] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:55] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:57] overlayfs: idmapped layers are currently not supported
	[Oct 8 22:58] overlayfs: idmapped layers are currently not supported
	[  +5.164783] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:00] overlayfs: idmapped layers are currently not supported
	[  +1.568442] overlayfs: idmapped layers are currently not supported
	[Oct 8 23:02] overlayfs: idmapped layers are currently not supported
	[  +3.214273] overlayfs: idmapped layers are currently not supported
	[ +30.544324] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a02593f091bf4e1b869b3247780e1413dad2f19604ea728bddf11702a939688d] <==
	{"level":"warn","ts":"2025-10-08T23:02:38.606323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.620500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.651341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.667133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.684557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.713088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.731888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.746447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.766241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.791579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.808676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.827760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.842904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.870298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.888793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.907358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.928196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.941501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.961100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:38.991454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.045421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.094829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.126416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.158764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-08T23:02:39.293965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60392","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:02:49 up  1:45,  0 user,  load average: 6.33, 3.64, 2.46
	Linux newest-cni-598445 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d0704f1922f2eadd7dfb36c7a7a9295e04aceb1e27b7c9bd45718448efbdac3] <==
	I1008 23:02:41.934840       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1008 23:02:41.935062       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1008 23:02:41.935156       1 main.go:148] setting mtu 1500 for CNI 
	I1008 23:02:41.935213       1 main.go:178] kindnetd IP family: "ipv4"
	I1008 23:02:41.935252       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-08T23:02:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1008 23:02:42.266565       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1008 23:02:42.267420       1 controller.go:381] "Waiting for informer caches to sync"
	I1008 23:02:42.267450       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1008 23:02:42.273880       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [2e1d9249276823a522db408755e54fa95f368a0472e8b4462337afea4b239c01] <==
	I1008 23:02:40.875709       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1008 23:02:40.875770       1 policy_source.go:240] refreshing policies
	I1008 23:02:40.905982       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 23:02:40.908622       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1008 23:02:40.925326       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1008 23:02:40.925583       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1008 23:02:40.973207       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1008 23:02:40.973786       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1008 23:02:40.973841       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1008 23:02:40.999176       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1008 23:02:40.999496       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1008 23:02:40.999611       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1008 23:02:41.066935       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1008 23:02:41.177838       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1008 23:02:41.400558       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 23:02:41.850832       1 controller.go:667] quota admission added evaluator for: namespaces
	I1008 23:02:42.025969       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1008 23:02:42.196571       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 23:02:42.246345       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 23:02:42.559027       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.38.244"}
	I1008 23:02:42.642381       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.189.41"}
	I1008 23:02:44.799920       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 23:02:45.104731       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1008 23:02:45.220214       1 controller.go:667] quota admission added evaluator for: endpoints
	I1008 23:02:45.303695       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0bc68321c46808e03761c9cb44ba461f8c6e1bc97bd15431fd9e2c7996063a0a] <==
	I1008 23:02:44.752343       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1008 23:02:44.755798       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1008 23:02:44.756137       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1008 23:02:44.759543       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1008 23:02:44.760839       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1008 23:02:44.761463       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1008 23:02:44.766601       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1008 23:02:44.766807       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1008 23:02:44.768744       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1008 23:02:44.790250       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1008 23:02:44.790349       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1008 23:02:44.790467       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-598445"
	I1008 23:02:44.790520       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1008 23:02:44.790565       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1008 23:02:44.791578       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1008 23:02:44.791591       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1008 23:02:44.793364       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1008 23:02:44.793600       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1008 23:02:44.794937       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1008 23:02:44.796191       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1008 23:02:44.797780       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1008 23:02:44.800091       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1008 23:02:44.804931       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1008 23:02:44.807964       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1008 23:02:44.818865       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [8cf8c0d7516b417a501cb314c74274b81c8a3407a49a6c628c6ce6f0d4d9f061] <==
	I1008 23:02:43.082366       1 server_linux.go:53] "Using iptables proxy"
	I1008 23:02:43.794985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1008 23:02:43.896399       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1008 23:02:43.905489       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1008 23:02:43.918576       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1008 23:02:44.206011       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1008 23:02:44.206071       1 server_linux.go:132] "Using iptables Proxier"
	I1008 23:02:44.209967       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1008 23:02:44.210284       1 server.go:527] "Version info" version="v1.34.1"
	I1008 23:02:44.210308       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:02:44.211904       1 config.go:200] "Starting service config controller"
	I1008 23:02:44.211934       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1008 23:02:44.211966       1 config.go:106] "Starting endpoint slice config controller"
	I1008 23:02:44.211980       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1008 23:02:44.211992       1 config.go:403] "Starting serviceCIDR config controller"
	I1008 23:02:44.211997       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1008 23:02:44.212621       1 config.go:309] "Starting node config controller"
	I1008 23:02:44.212640       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1008 23:02:44.212647       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1008 23:02:44.312236       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1008 23:02:44.312241       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1008 23:02:44.312257       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3e64781da390812405d95dae933cfc7c3dd1126eb65abc070fb5a61a4a805bd1] <==
	I1008 23:02:38.087233       1 serving.go:386] Generated self-signed cert in-memory
	I1008 23:02:44.418128       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1008 23:02:44.418164       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 23:02:44.425821       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1008 23:02:44.425942       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1008 23:02:44.425968       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1008 23:02:44.426011       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1008 23:02:44.432177       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:02:44.437688       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:02:44.438302       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:02:44.442581       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1008 23:02:44.526801       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1008 23:02:44.539111       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1008 23:02:44.543275       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 08 23:02:38 newest-cni-598445 kubelet[730]: E1008 23:02:38.976884     730 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-598445\" not found" node="newest-cni-598445"
	Oct 08 23:02:39 newest-cni-598445 kubelet[730]: E1008 23:02:39.667544     730 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-598445\" not found" node="newest-cni-598445"
	Oct 08 23:02:40 newest-cni-598445 kubelet[730]: I1008 23:02:40.561280     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-598445"
	Oct 08 23:02:40 newest-cni-598445 kubelet[730]: I1008 23:02:40.945386     730 apiserver.go:52] "Watching apiserver"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.058478     730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.106878     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3bc119f-422b-4196-a3e2-c9daa5264ebc-xtables-lock\") pod \"kube-proxy-qjt47\" (UID: \"d3bc119f-422b-4196-a3e2-c9daa5264ebc\") " pod="kube-system/kube-proxy-qjt47"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.107153     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c47d037-c2a6-404d-82fd-1efa6e55ad21-lib-modules\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.107270     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4c47d037-c2a6-404d-82fd-1efa6e55ad21-cni-cfg\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.107359     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c47d037-c2a6-404d-82fd-1efa6e55ad21-xtables-lock\") pod \"kindnet-26wwk\" (UID: \"4c47d037-c2a6-404d-82fd-1efa6e55ad21\") " pod="kube-system/kindnet-26wwk"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.107458     730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3bc119f-422b-4196-a3e2-c9daa5264ebc-lib-modules\") pod \"kube-proxy-qjt47\" (UID: \"d3bc119f-422b-4196-a3e2-c9daa5264ebc\") " pod="kube-system/kube-proxy-qjt47"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: E1008 23:02:41.147288     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-598445\" already exists" pod="kube-system/etcd-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.147471     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.148213     730 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.148418     730 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.148525     730 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.150862     730 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: E1008 23:02:41.289195     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-598445\" already exists" pod="kube-system/kube-apiserver-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.289434     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.306875     730 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: E1008 23:02:41.438302     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-598445\" already exists" pod="kube-system/kube-controller-manager-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: I1008 23:02:41.438485     730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-598445"
	Oct 08 23:02:41 newest-cni-598445 kubelet[730]: E1008 23:02:41.499525     730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-598445\" already exists" pod="kube-system/kube-scheduler-newest-cni-598445"
	Oct 08 23:02:44 newest-cni-598445 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 08 23:02:44 newest-cni-598445 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 08 23:02:44 newest-cni-598445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-598445 -n newest-cni-598445
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-598445 -n newest-cni-598445: exit status 2 (388.872656ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-598445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-2qjrv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l7tgw kubernetes-dashboard-855c9754f9-8pc59
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-598445 describe pod coredns-66bc5c9577-2qjrv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l7tgw kubernetes-dashboard-855c9754f9-8pc59
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-598445 describe pod coredns-66bc5c9577-2qjrv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l7tgw kubernetes-dashboard-855c9754f9-8pc59: exit status 1 (85.39415ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-2qjrv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-l7tgw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-8pc59" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-598445 describe pod coredns-66bc5c9577-2qjrv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l7tgw kubernetes-dashboard-855c9754f9-8pc59: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.30s)

                                                
                                    

Test pass (258/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 36.28
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 37.14
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 172.61
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 9.98
48 TestAddons/StoppedEnableDisable 12.2
49 TestCertOptions 33.25
50 TestCertExpiration 232.51
59 TestErrorSpam/setup 32.73
60 TestErrorSpam/start 0.77
61 TestErrorSpam/status 1.1
62 TestErrorSpam/pause 7.1
63 TestErrorSpam/unpause 5.53
64 TestErrorSpam/stop 1.42
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.41
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.69
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.12
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
76 TestFunctional/serial/CacheCmd/cache/add_local 1.13
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 38.41
85 TestFunctional/serial/ComponentHealth 0.09
86 TestFunctional/serial/LogsCmd 1.54
87 TestFunctional/serial/LogsFileCmd 1.51
88 TestFunctional/serial/InvalidService 4.75
90 TestFunctional/parallel/ConfigCmd 0.54
91 TestFunctional/parallel/DashboardCmd 9.98
92 TestFunctional/parallel/DryRun 0.66
93 TestFunctional/parallel/InternationalLanguage 0.26
94 TestFunctional/parallel/StatusCmd 1.32
99 TestFunctional/parallel/AddonsCmd 0.17
100 TestFunctional/parallel/PersistentVolumeClaim 25.98
102 TestFunctional/parallel/SSHCmd 0.69
103 TestFunctional/parallel/CpCmd 2.38
105 TestFunctional/parallel/FileSync 0.34
106 TestFunctional/parallel/CertSync 2.19
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
114 TestFunctional/parallel/License 0.32
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.46
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
128 TestFunctional/parallel/ProfileCmd/profile_list 0.43
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
130 TestFunctional/parallel/MountCmd/any-port 8.09
131 TestFunctional/parallel/MountCmd/specific-port 2
132 TestFunctional/parallel/MountCmd/VerifyCleanup 2.27
133 TestFunctional/parallel/ServiceCmd/List 0.6
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
138 TestFunctional/parallel/Version/short 0.07
139 TestFunctional/parallel/Version/components 1.3
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.9
145 TestFunctional/parallel/ImageCommands/Setup 0.71
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 168.69
164 TestMultiControlPlane/serial/DeployApp 6.54
165 TestMultiControlPlane/serial/PingHostFromPods 1.47
166 TestMultiControlPlane/serial/AddWorkerNode 30.39
167 TestMultiControlPlane/serial/NodeLabels 0.12
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
169 TestMultiControlPlane/serial/CopyFile 20
170 TestMultiControlPlane/serial/StopSecondaryNode 12.79
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
172 TestMultiControlPlane/serial/RestartSecondaryNode 28.2
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.57
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 113.9
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.71
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
177 TestMultiControlPlane/serial/StopCluster 35.65
178 TestMultiControlPlane/serial/RestartCluster 70.8
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
180 TestMultiControlPlane/serial/AddSecondaryNode 78.97
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 82.85
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.69
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 49.55
211 TestKicCustomNetwork/use_default_bridge_network 37.63
212 TestKicExistingNetwork 37.88
213 TestKicCustomSubnet 39.35
214 TestKicStaticIP 36.66
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 71.02
219 TestMountStart/serial/StartWithMountFirst 6.42
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.26
222 TestMountStart/serial/VerifyMountSecond 0.3
223 TestMountStart/serial/DeleteFirst 1.66
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.22
226 TestMountStart/serial/RestartStopped 8.67
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 140.59
231 TestMultiNode/serial/DeployApp2Nodes 5.59
232 TestMultiNode/serial/PingHostFrom2Pods 0.94
233 TestMultiNode/serial/AddNode 58.86
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.4
237 TestMultiNode/serial/StopNode 2.35
238 TestMultiNode/serial/StartAfterStop 8.15
239 TestMultiNode/serial/RestartKeepsNodes 71.67
240 TestMultiNode/serial/DeleteNode 5.55
241 TestMultiNode/serial/StopMultiNode 23.78
242 TestMultiNode/serial/RestartMultiNode 48.05
243 TestMultiNode/serial/ValidateNameConflict 37.37
248 TestPreload 118.95
250 TestScheduledStopUnix 112.96
253 TestInsufficientStorage 14.27
254 TestRunningBinaryUpgrade 65.23
256 TestKubernetesUpgrade 114.14
257 TestMissingContainerUpgrade 120.81
259 TestPause/serial/Start 91.25
260 TestPause/serial/SecondStartNoReconfiguration 40.22
262 TestStoppedBinaryUpgrade/Setup 2.56
263 TestStoppedBinaryUpgrade/Upgrade 67.73
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.27
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
274 TestNoKubernetes/serial/StartWithK8s 32.28
275 TestNoKubernetes/serial/StartWithStopK8s 6.64
276 TestNoKubernetes/serial/Start 5.91
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
278 TestNoKubernetes/serial/ProfileList 1.02
279 TestNoKubernetes/serial/Stop 1.25
280 TestNoKubernetes/serial/StartNoArgs 6.76
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
289 TestNetworkPlugins/group/false 3.57
294 TestStartStop/group/old-k8s-version/serial/FirstStart 59.49
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.44
297 TestStartStop/group/old-k8s-version/serial/Stop 11.89
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
299 TestStartStop/group/old-k8s-version/serial/SecondStart 45.48
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
305 TestStartStop/group/no-preload/serial/FirstStart 61.55
306 TestStartStop/group/no-preload/serial/DeployApp 8.4
308 TestStartStop/group/no-preload/serial/Stop 11.84
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/no-preload/serial/SecondStart 48.49
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
316 TestStartStop/group/embed-certs/serial/FirstStart 86.83
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.57
319 TestStartStop/group/embed-certs/serial/DeployApp 10.32
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.67
323 TestStartStop/group/embed-certs/serial/Stop 12.02
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.05
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/embed-certs/serial/SecondStart 58.28
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.97
329 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.12
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
338 TestStartStop/group/newest-cni/serial/FirstStart 48.95
339 TestNetworkPlugins/group/auto/Start 86
340 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/Stop 1.25
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
344 TestStartStop/group/newest-cni/serial/SecondStart 15.82
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
349 TestNetworkPlugins/group/kindnet/Start 85.6
350 TestNetworkPlugins/group/auto/KubeletFlags 0.43
351 TestNetworkPlugins/group/auto/NetCatPod 11.32
352 TestNetworkPlugins/group/auto/DNS 0.22
353 TestNetworkPlugins/group/auto/Localhost 0.16
354 TestNetworkPlugins/group/auto/HairPin 0.16
355 TestNetworkPlugins/group/calico/Start 55.47
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/DNS 0.25
361 TestNetworkPlugins/group/kindnet/Localhost 0.17
362 TestNetworkPlugins/group/kindnet/HairPin 0.18
363 TestNetworkPlugins/group/calico/KubeletFlags 0.34
364 TestNetworkPlugins/group/calico/NetCatPod 11.26
365 TestNetworkPlugins/group/calico/DNS 0.23
366 TestNetworkPlugins/group/calico/Localhost 0.18
367 TestNetworkPlugins/group/calico/HairPin 0.21
368 TestNetworkPlugins/group/custom-flannel/Start 71.52
369 TestNetworkPlugins/group/enable-default-cni/Start 79.61
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
372 TestNetworkPlugins/group/custom-flannel/DNS 0.17
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.42
377 TestNetworkPlugins/group/flannel/Start 65.39
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.39
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
381 TestNetworkPlugins/group/bridge/Start 79.19
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
384 TestNetworkPlugins/group/flannel/NetCatPod 10.34
385 TestNetworkPlugins/group/flannel/DNS 0.16
386 TestNetworkPlugins/group/flannel/Localhost 0.13
387 TestNetworkPlugins/group/flannel/HairPin 0.14
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 10.25
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.14
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (36.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-117299 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-117299 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (36.276428074s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (36.28s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1008 21:50:56.769960    4286 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1008 21:50:56.770046    4286 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-117299
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-117299: exit status 85 (82.145811ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-117299 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-117299 │ jenkins │ v1.37.0 │ 08 Oct 25 21:50 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 21:50:20
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 21:50:20.536023    4292 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:50:20.536234    4292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:50:20.536262    4292 out.go:374] Setting ErrFile to fd 2...
	I1008 21:50:20.536281    4292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:50:20.536578    4292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	W1008 21:50:20.536755    4292 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21681-2481/.minikube/config/config.json: open /home/jenkins/minikube-integration/21681-2481/.minikube/config/config.json: no such file or directory
	I1008 21:50:20.537242    4292 out.go:368] Setting JSON to true
	I1008 21:50:20.538093    4292 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1971,"bootTime":1759958250,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 21:50:20.538187    4292 start.go:141] virtualization:  
	I1008 21:50:20.542227    4292 out.go:99] [download-only-117299] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1008 21:50:20.542410    4292 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball: no such file or directory
	I1008 21:50:20.542523    4292 notify.go:220] Checking for updates...
	I1008 21:50:20.546241    4292 out.go:171] MINIKUBE_LOCATION=21681
	I1008 21:50:20.549249    4292 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 21:50:20.552116    4292 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 21:50:20.555058    4292 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 21:50:20.557967    4292 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1008 21:50:20.563553    4292 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 21:50:20.563813    4292 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 21:50:20.586228    4292 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 21:50:20.586346    4292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 21:50:20.993733    4292 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-08 21:50:20.98469917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 21:50:20.993844    4292 docker.go:318] overlay module found
	I1008 21:50:20.996819    4292 out.go:99] Using the docker driver based on user configuration
	I1008 21:50:20.996860    4292 start.go:305] selected driver: docker
	I1008 21:50:20.996875    4292 start.go:925] validating driver "docker" against <nil>
	I1008 21:50:20.997018    4292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 21:50:21.057983    4292 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-08 21:50:21.048955364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 21:50:21.058139    4292 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 21:50:21.058431    4292 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1008 21:50:21.058594    4292 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 21:50:21.061758    4292 out.go:171] Using Docker driver with root privileges
	I1008 21:50:21.064635    4292 cni.go:84] Creating CNI manager for ""
	I1008 21:50:21.064697    4292 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 21:50:21.064709    4292 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 21:50:21.064784    4292 start.go:349] cluster config:
	{Name:download-only-117299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-117299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 21:50:21.067701    4292 out.go:99] Starting "download-only-117299" primary control-plane node in "download-only-117299" cluster
	I1008 21:50:21.067737    4292 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 21:50:21.070598    4292 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1008 21:50:21.070635    4292 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 21:50:21.070660    4292 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 21:50:21.086883    4292 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1008 21:50:21.087076    4292 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1008 21:50:21.087188    4292 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1008 21:50:21.123035    4292 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1008 21:50:21.123072    4292 cache.go:58] Caching tarball of preloaded images
	I1008 21:50:21.123221    4292 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1008 21:50:21.126577    4292 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1008 21:50:21.126606    4292 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1008 21:50:21.206635    4292 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1008 21:50:21.206764    4292 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1008 21:50:25.812839    4292 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	
	
	* The control-plane node download-only-117299 host does not exist
	  To start a cluster, run: "minikube start -p download-only-117299"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
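Note on the preload step in the log above: minikube first asks the GCS API for the expected MD5 ("Got checksum from GCS API ...") and then downloads the .tar.lz4 with that checksum appended to the URL. Below is a minimal sketch of that download-and-verify pattern in Go, using only the standard library and reusing the URL and checksum values from the log purely as example inputs; it is an illustration, not minikube's actual download code.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and verifies the body against the
// expected MD5 hex digest, mirroring the checksum=md5:... pattern in the log.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to disk and hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum taken from the log above, used here only as an example.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
	if err := downloadWithMD5(url, "preload.tar.lz4", "e092595ade89dbfc477bd4cd6b9c633b"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}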

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-117299
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (37.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-473331 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-473331 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (37.1350586s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (37.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1008 21:51:34.349861    4286 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1008 21:51:34.349897    4286 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-473331
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-473331: exit status 85 (88.42867ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-117299 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-117299 │ jenkins │ v1.37.0 │ 08 Oct 25 21:50 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 08 Oct 25 21:50 UTC │ 08 Oct 25 21:50 UTC │
	│ delete  │ -p download-only-117299                                                                                                                                                   │ download-only-117299 │ jenkins │ v1.37.0 │ 08 Oct 25 21:50 UTC │ 08 Oct 25 21:50 UTC │
	│ start   │ -o=json --download-only -p download-only-473331 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-473331 │ jenkins │ v1.37.0 │ 08 Oct 25 21:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/08 21:50:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 21:50:57.254663    4493 out.go:360] Setting OutFile to fd 1 ...
	I1008 21:50:57.254780    4493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:50:57.254791    4493 out.go:374] Setting ErrFile to fd 2...
	I1008 21:50:57.254797    4493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 21:50:57.255151    4493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 21:50:57.255631    4493 out.go:368] Setting JSON to true
	I1008 21:50:57.256371    4493 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2008,"bootTime":1759958250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 21:50:57.256918    4493 start.go:141] virtualization:  
	I1008 21:50:57.260173    4493 out.go:99] [download-only-473331] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 21:50:57.260374    4493 notify.go:220] Checking for updates...
	I1008 21:50:57.263170    4493 out.go:171] MINIKUBE_LOCATION=21681
	I1008 21:50:57.266071    4493 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 21:50:57.268912    4493 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 21:50:57.271730    4493 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 21:50:57.274644    4493 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1008 21:50:57.280390    4493 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 21:50:57.280639    4493 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 21:50:57.304759    4493 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 21:50:57.304877    4493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 21:50:57.368597    4493 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-08 21:50:57.358995813 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 21:50:57.368704    4493 docker.go:318] overlay module found
	I1008 21:50:57.371712    4493 out.go:99] Using the docker driver based on user configuration
	I1008 21:50:57.371762    4493 start.go:305] selected driver: docker
	I1008 21:50:57.371775    4493 start.go:925] validating driver "docker" against <nil>
	I1008 21:50:57.371893    4493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 21:50:57.427387    4493 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-08 21:50:57.418557686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 21:50:57.427555    4493 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1008 21:50:57.427839    4493 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1008 21:50:57.428001    4493 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 21:50:57.431228    4493 out.go:171] Using Docker driver with root privileges
	I1008 21:50:57.434154    4493 cni.go:84] Creating CNI manager for ""
	I1008 21:50:57.434228    4493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1008 21:50:57.434243    4493 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 21:50:57.434318    4493 start.go:349] cluster config:
	{Name:download-only-473331 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-473331 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 21:50:57.437250    4493 out.go:99] Starting "download-only-473331" primary control-plane node in "download-only-473331" cluster
	I1008 21:50:57.437287    4493 cache.go:123] Beginning downloading kic base image for docker with crio
	I1008 21:50:57.440225    4493 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1008 21:50:57.440272    4493 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 21:50:57.440447    4493 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1008 21:50:57.457206    4493 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1008 21:50:57.457354    4493 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1008 21:50:57.457393    4493 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1008 21:50:57.457403    4493 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1008 21:50:57.457414    4493 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1008 21:50:57.489008    4493 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1008 21:50:57.489035    4493 cache.go:58] Caching tarball of preloaded images
	I1008 21:50:57.489194    4493 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1008 21:50:57.492393    4493 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1008 21:50:57.492440    4493 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1008 21:50:57.574954    4493 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1008 21:50:57.575008    4493 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21681-2481/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-473331 host does not exist
	  To start a cluster, run: "minikube start -p download-only-473331"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-473331
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1008 21:51:35.547497    4286 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-098672 --alsologtostderr --binary-mirror http://127.0.0.1:36433 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-098672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-098672
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-961288
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-961288: exit status 85 (63.852262ms)

                                                
                                                
-- stdout --
	* Profile "addons-961288" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-961288"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-961288
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-961288: exit status 85 (84.788097ms)

                                                
                                                
-- stdout --
	* Profile "addons-961288" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-961288"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (172.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-961288 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-961288 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.606425844s)
--- PASS: TestAddons/Setup (172.61s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-961288 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-961288 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.98s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-961288 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-961288 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ee1c1f17-8433-4de6-9ebc-dce0e18312b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ee1c1f17-8433-4de6-9ebc-dce0e18312b4] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003305741s
addons_test.go:694: (dbg) Run:  kubectl --context addons-961288 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-961288 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-961288 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-961288 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.98s)
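The helper lines above poll pods matching the label "integration-test=busybox" until they report Running ("healthy within 9.003305741s"). A rough sketch of that kind of label-selector wait with client-go follows, assuming a kubeconfig path from $KUBECONFIG and the "default" namespace; it is an illustration, not the helpers_test.go implementation.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls the "default" namespace until every pod matching the
// label selector reports phase Running, or the timeout expires.
func waitForRunning(cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		running := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				running = false
			}
		}
		if running {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}

func main() {
	// Kubeconfig location is an assumption made for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForRunning(cs, "integration-test=busybox", 8*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("busybox is Running")
}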

                                                
                                    
TestAddons/StoppedEnableDisable (12.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-961288
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-961288: (11.913032383s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-961288
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-961288
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-961288
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

                                                
                                    
TestCertOptions (33.25s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-378019 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-378019 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.615641945s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-378019 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-378019 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-378019 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-378019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-378019
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-378019: (1.950781362s)
--- PASS: TestCertOptions (33.25s)
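The SAN check above reads the generated certificate with "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" after starting with extra --apiserver-ips and --apiserver-names. A rough Go equivalent of that inspection with crypto/x509 is sketched below; the path comes from the log, and in practice the file would be read over "minikube ssh" rather than locally.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the test log; on a real cluster this file lives inside the node.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The SANs are what --apiserver-ips and --apiserver-names end up controlling.
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP SANs:  ", cert.IPAddresses)
	fmt.Println("NotAfter: ", cert.NotAfter)
}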

                                                
                                    
TestCertExpiration (232.51s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-292528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-292528 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (32.544082285s)
E1008 22:46:38.631501    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:49:13.073259    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-292528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1008 22:49:29.998722    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:49:41.691077    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-292528 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.615269979s)
helpers_test.go:175: Cleaning up "cert-expiration-292528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-292528
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-292528: (2.34556477s)
--- PASS: TestCertExpiration (232.51s)

                                                
                                    
TestErrorSpam/setup (32.73s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-315674 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-315674 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-315674 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-315674 --driver=docker  --container-runtime=crio: (32.725698303s)
--- PASS: TestErrorSpam/setup (32.73s)

                                                
                                    
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
TestErrorSpam/pause (7.1s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 pause: exit status 80 (2.488126014s)

                                                
                                                
-- stdout --
	* Pausing node nospam-315674 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:58:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 pause: exit status 80 (2.437149679s)

                                                
                                                
-- stdout --
	* Pausing node nospam-315674 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:58:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 pause: exit status 80 (2.175308088s)

                                                
                                                
-- stdout --
	* Pausing node nospam-315674 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:58:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.10s)

                                                
                                    
TestErrorSpam/unpause (5.53s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 unpause: exit status 80 (1.672587156s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-315674 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:58:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 unpause: exit status 80 (2.015775784s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-315674 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:58:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 unpause: exit status 80 (1.845173009s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-315674 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-08T21:58:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.53s)

                                                
                                    
TestErrorSpam/stop (1.42s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 stop: (1.216019076s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-315674 --log_dir /tmp/nospam-315674 stop
--- PASS: TestErrorSpam/stop (1.42s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21681-2481/.minikube/files/etc/test/nested/copy/4286/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.41s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101115 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1008 21:59:30.002736    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:30.009933    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:30.021317    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:30.042608    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:30.083951    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:30.165302    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:30.326666    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:30.648000    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:31.289491    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:32.570968    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:35.133877    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:40.255613    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 21:59:50.497053    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:00:10.979377    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-101115 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.411717591s)
--- PASS: TestFunctional/serial/StartWithProxy (80.41s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.69s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1008 22:00:13.136680    4286 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101115 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-101115 --alsologtostderr -v=8: (29.687933157s)
functional_test.go:678: soft start took 29.690235538s for "functional-101115" cluster.
I1008 22:00:42.825155    4286 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.69s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-101115 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 cache add registry.k8s.io/pause:3.1: (1.207944484s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 cache add registry.k8s.io/pause:3.3: (1.281371358s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 cache add registry.k8s.io/pause:latest: (1.09684062s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-101115 /tmp/TestFunctionalserialCacheCmdcacheadd_local2862848405/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 cache add minikube-local-cache-test:functional-101115
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 cache delete minikube-local-cache-test:functional-101115
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-101115
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (307.071578ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 kubectl -- --context functional-101115 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-101115 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.41s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101115 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1008 22:00:51.941762    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-101115 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.413803094s)
functional_test.go:776: restart took 38.413890603s for "functional-101115" cluster.
I1008 22:01:28.840445    4286 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (38.41s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-101115 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 logs: (1.538295025s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 logs --file /tmp/TestFunctionalserialLogsFileCmd2701205851/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 logs --file /tmp/TestFunctionalserialLogsFileCmd2701205851/001/logs.txt: (1.509102056s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.75s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-101115 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-101115
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-101115: exit status 115 (409.012123ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31301 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-101115 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-101115 delete -f testdata/invalidsvc.yaml: (1.073141s)
--- PASS: TestFunctional/serial/InvalidService (4.75s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 config get cpus: exit status 14 (104.515568ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 config get cpus: exit status 14 (85.336686ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-101115 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-101115 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 31078: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.98s)

                                                
                                    
TestFunctional/parallel/DryRun (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101115 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-101115 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (289.774917ms)

                                                
                                                
-- stdout --
	* [functional-101115] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:12:08.342979   30489 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:12:08.343151   30489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:12:08.343162   30489 out.go:374] Setting ErrFile to fd 2...
	I1008 22:12:08.343168   30489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:12:08.343414   30489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:12:08.343785   30489 out.go:368] Setting JSON to false
	I1008 22:12:08.344755   30489 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3279,"bootTime":1759958250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:12:08.344828   30489 start.go:141] virtualization:  
	I1008 22:12:08.348194   30489 out.go:179] * [functional-101115] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:12:08.352020   30489 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:12:08.352065   30489 notify.go:220] Checking for updates...
	I1008 22:12:08.358167   30489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:12:08.361352   30489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:12:08.364361   30489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:12:08.367299   30489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:12:08.370451   30489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:12:08.373859   30489 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:12:08.374500   30489 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:12:08.432333   30489 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:12:08.432445   30489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:12:08.523842   30489 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:12:08.512307305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:12:08.523942   30489 docker.go:318] overlay module found
	I1008 22:12:08.527144   30489 out.go:179] * Using the docker driver based on existing profile
	I1008 22:12:08.530021   30489 start.go:305] selected driver: docker
	I1008 22:12:08.530038   30489 start.go:925] validating driver "docker" against &{Name:functional-101115 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-101115 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:12:08.530141   30489 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:12:08.533588   30489 out.go:203] 
	W1008 22:12:08.536558   30489 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1008 22:12:08.539445   30489 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101115 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.66s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-101115 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-101115 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (256.370397ms)

                                                
                                                
-- stdout --
	* [functional-101115] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:12:08.066206   30418 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:12:08.066373   30418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:12:08.066379   30418 out.go:374] Setting ErrFile to fd 2...
	I1008 22:12:08.066383   30418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:12:08.067766   30418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:12:08.068264   30418 out.go:368] Setting JSON to false
	I1008 22:12:08.069105   30418 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3278,"bootTime":1759958250,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:12:08.069194   30418 start.go:141] virtualization:  
	I1008 22:12:08.072738   30418 out.go:179] * [functional-101115] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1008 22:12:08.076764   30418 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:12:08.076807   30418 notify.go:220] Checking for updates...
	I1008 22:12:08.082992   30418 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:12:08.085941   30418 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:12:08.089282   30418 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:12:08.093285   30418 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:12:08.096295   30418 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:12:08.102038   30418 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:12:08.102754   30418 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:12:08.140039   30418 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:12:08.140166   30418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:12:08.231867   30418 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:12:08.217768009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:12:08.231981   30418 docker.go:318] overlay module found
	I1008 22:12:08.235177   30418 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1008 22:12:08.238032   30418 start.go:305] selected driver: docker
	I1008 22:12:08.238058   30418 start.go:925] validating driver "docker" against &{Name:functional-101115 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-101115 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 22:12:08.238169   30418 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:12:08.241955   30418 out.go:203] 
	W1008 22:12:08.245477   30418 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1008 22:12:08.248453   30418 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b0884d04-4b82-4a92-a494-a0c2fc833c3e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003323241s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-101115 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-101115 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-101115 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-101115 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2c721a3c-541c-4523-bb3b-e517f4670675] Pending
helpers_test.go:352: "sp-pod" [2c721a3c-541c-4523-bb3b-e517f4670675] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2c721a3c-541c-4523-bb3b-e517f4670675] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003269095s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-101115 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-101115 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-101115 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [23c592cb-adf2-49a8-8d64-8f526aa4cf78] Pending
helpers_test.go:352: "sp-pod" [23c592cb-adf2-49a8-8d64-8f526aa4cf78] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003831825s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-101115 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.98s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh -n functional-101115 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 cp functional-101115:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3929051007/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh -n functional-101115 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh -n functional-101115 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)
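
The same copy round-trip can be reproduced by hand; the paths mirror the test, and /tmp/out is a hypothetical local destination directory:

    # Copy a file into the node, read it back over SSH, then copy it back out to the host
    out/minikube-linux-arm64 -p functional-101115 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-101115 ssh -n functional-101115 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p functional-101115 cp functional-101115:/home/docker/cp-test.txt /tmp/out/cp-test.txt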

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4286/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /etc/test/nested/copy/4286/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4286.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /etc/ssl/certs/4286.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4286.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /usr/share/ca-certificates/4286.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/42862.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /etc/ssl/certs/42862.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/42862.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /usr/share/ca-certificates/42862.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)
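
The checks above reduce to confirming that the host's test certificates were synced into both trust locations inside the node; a sketch using the PEM names generated for this run:

    # The synced certificate should be readable under both paths, plus the hash-named entry in /etc/ssl/certs
    out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /etc/ssl/certs/4286.pem"
    out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /usr/share/ca-certificates/4286.pem"
    out/minikube-linux-arm64 -p functional-101115 ssh "sudo cat /etc/ssl/certs/51391683.0"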

                                                
                                    
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-101115 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 ssh "sudo systemctl is-active docker": exit status 1 (374.518524ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 ssh "sudo systemctl is-active containerd": exit status 1 (348.565839ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
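
With crio selected as the container runtime, the other runtimes are expected to be stopped; the probe is simply systemctl is-active for each, which exits non-zero for an inactive unit, so the non-zero exits above are the passing case:

    # Both non-selected runtimes should print "inactive" and return a non-zero status
    out/minikube-linux-arm64 -p functional-101115 ssh "sudo systemctl is-active docker"
    out/minikube-linux-arm64 -p functional-101115 ssh "sudo systemctl is-active containerd"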

                                                
                                    
TestFunctional/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-101115 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-101115 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-101115 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 26609: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-101115 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-101115 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-101115 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [aad32bc3-8995-4471-9e46-17acd2b02d3c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [aad32bc3-8995-4471-9e46-17acd2b02d3c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003695647s
I1008 22:01:49.083999    4286 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)
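
A sketch of the tunnel flow verified across this serial group, assuming the cluster is up; testdata/testsvc.yaml is the test's own service manifest for nginx-svc:

    # Keep a tunnel running so LoadBalancer services receive an ingress IP reachable from the host
    out/minikube-linux-arm64 -p functional-101115 tunnel --alsologtostderr &
    kubectl --context functional-101115 apply -f testdata/testsvc.yaml
    # Once the pod is Running, the service exposes an ingress IP that should answer HTTP from the host
    kubectl --context functional-101115 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'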

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-101115 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.186.180 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-101115 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "368.963568ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "57.269514ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "381.32728ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "57.623267ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
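
The JSON output is the scripting-friendly path; a hedged example (jq is an assumption of the example, not something the test uses) that pretty-prints both variants, where --light skips the slower per-profile status checks and accounts for the timing gap logged above:

    # Pretty-print the profile list; --light returns faster by skipping cluster status validation
    out/minikube-linux-arm64 profile list -o json | jq .
    out/minikube-linux-arm64 profile list -o json --light | jq .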

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdany-port4178482731/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759961514327880167" to /tmp/TestFunctionalparallelMountCmdany-port4178482731/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759961514327880167" to /tmp/TestFunctionalparallelMountCmdany-port4178482731/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759961514327880167" to /tmp/TestFunctionalparallelMountCmdany-port4178482731/001/test-1759961514327880167
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.502246ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 22:11:54.692533    4286 retry.go:31] will retry after 716.479806ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  8 22:11 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  8 22:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  8 22:11 test-1759961514327880167
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh cat /mount-9p/test-1759961514327880167
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-101115 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [f1a68ed5-c09a-4359-ad25-a9776051e8d3] Pending
helpers_test.go:352: "busybox-mount" [f1a68ed5-c09a-4359-ad25-a9776051e8d3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [f1a68ed5-c09a-4359-ad25-a9776051e8d3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [f1a68ed5-c09a-4359-ad25-a9776051e8d3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003303845s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-101115 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdany-port4178482731/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.09s)
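
A sketch of the 9p mount round-trip performed here; /tmp/somedir stands in for the temporary host directory the test creates, everything else mirrors the log above:

    # Share a host directory into the node over 9p, confirm it is mounted, then tear it down
    out/minikube-linux-arm64 mount -p functional-101115 /tmp/somedir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-101115 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-101115 ssh "sudo umount -f /mount-9p"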

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdspecific-port3746645389/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.056865ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 22:12:02.796827    4286 retry.go:31] will retry after 568.496139ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdspecific-port3746645389/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 ssh "sudo umount -f /mount-9p": exit status 1 (290.035988ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-101115 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdspecific-port3746645389/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557220909/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557220909/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557220909/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T" /mount1: exit status 1 (631.865484ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1008 22:12:05.052154    4286 retry.go:31] will retry after 436.207827ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-101115 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557220909/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557220909/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-101115 /tmp/TestFunctionalparallelMountCmdVerifyCleanup557220909/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 service list -o json
functional_test.go:1504: Took "634.243302ms" to run "out/minikube-linux-arm64 -p functional-101115 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 version -o=json --components: (1.301970252s)
--- PASS: TestFunctional/parallel/Version/components (1.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-101115 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101115 image ls --format short --alsologtostderr:
I1008 22:12:22.220119   32806 out.go:360] Setting OutFile to fd 1 ...
I1008 22:12:22.220229   32806 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:22.220240   32806 out.go:374] Setting ErrFile to fd 2...
I1008 22:12:22.220246   32806 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:22.220511   32806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
I1008 22:12:22.221398   32806 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:22.221551   32806 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:22.222047   32806 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
I1008 22:12:22.242314   32806 ssh_runner.go:195] Run: systemctl --version
I1008 22:12:22.242626   32806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
I1008 22:12:22.260311   32806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
I1008 22:12:22.364527   32806 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
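
The listings that follow exercise the same command with different output formats; only the --format flag changes:

    # List images known to the node's runtime in each supported format
    out/minikube-linux-arm64 -p functional-101115 image ls --format short
    out/minikube-linux-arm64 -p functional-101115 image ls --format table
    out/minikube-linux-arm64 -p functional-101115 image ls --format json
    out/minikube-linux-arm64 -p functional-101115 image ls --format yaml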

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-101115 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ alpine             │ d8e54d0a33288 │ 60.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101115 image ls --format table --alsologtostderr:
I1008 22:12:23.147976   33011 out.go:360] Setting OutFile to fd 1 ...
I1008 22:12:23.148084   33011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:23.148089   33011 out.go:374] Setting ErrFile to fd 2...
I1008 22:12:23.148093   33011 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:23.148336   33011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
I1008 22:12:23.148903   33011 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:23.149030   33011 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:23.149487   33011 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
I1008 22:12:23.171469   33011 ssh_runner.go:195] Run: systemctl --version
I1008 22:12:23.171527   33011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
I1008 22:12:23.189845   33011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
I1008 22:12:23.303308   33011 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-101115 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"d8e54d0a332887a610df62ea5c0b16fa4a1c2b7e4415dc5ac0dcfc6fa588cb70","repoDigests":["docker.io/library/nginx@sha256:52175fc0394e97029664721dfdb76a8af1e3045532ab5fb2249e555d50f347bc","docker.io/library/nginx@sha256:9388e9644d1118a705af691f800b926c4683665f1f748234e1289add5f5a95cd"],"repoTags"
:["docker.io/library/nginx:alpine"],"size":"60537870"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5
b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a7
47ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"siz
e":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"4391
1e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101115 image ls --format json --alsologtostderr:
I1008 22:12:22.864211   32943 out.go:360] Setting OutFile to fd 1 ...
I1008 22:12:22.864620   32943 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:22.864661   32943 out.go:374] Setting ErrFile to fd 2...
I1008 22:12:22.864681   32943 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:22.864996   32943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
I1008 22:12:22.865821   32943 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:22.866041   32943 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:22.866602   32943 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
I1008 22:12:22.889826   32943 ssh_runner.go:195] Run: systemctl --version
I1008 22:12:22.889881   32943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
I1008 22:12:22.914455   32943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
I1008 22:12:23.021153   32943 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-101115 image ls --format yaml --alsologtostderr:
- id: d8e54d0a332887a610df62ea5c0b16fa4a1c2b7e4415dc5ac0dcfc6fa588cb70
repoDigests:
- docker.io/library/nginx@sha256:52175fc0394e97029664721dfdb76a8af1e3045532ab5fb2249e555d50f347bc
- docker.io/library/nginx@sha256:9388e9644d1118a705af691f800b926c4683665f1f748234e1289add5f5a95cd
repoTags:
- docker.io/library/nginx:alpine
size: "60537870"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101115 image ls --format yaml --alsologtostderr:
I1008 22:12:22.454144   32846 out.go:360] Setting OutFile to fd 1 ...
I1008 22:12:22.454337   32846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:22.454350   32846 out.go:374] Setting ErrFile to fd 2...
I1008 22:12:22.454354   32846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:22.454614   32846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
I1008 22:12:22.455224   32846 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:22.455342   32846 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:22.455806   32846 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
I1008 22:12:22.482689   32846 ssh_runner.go:195] Run: systemctl --version
I1008 22:12:22.482747   32846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
I1008 22:12:22.506105   32846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
I1008 22:12:22.616246   32846 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-101115 ssh pgrep buildkitd: exit status 1 (334.585032ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image build -t localhost/my-image:functional-101115 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-101115 image build -t localhost/my-image:functional-101115 testdata/build --alsologtostderr: (3.333942131s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-101115 image build -t localhost/my-image:functional-101115 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 13893d387fa
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-101115
--> 359d8d0b05b
Successfully tagged localhost/my-image:functional-101115
359d8d0b05bb5b9903b338e3e67bd7774611133c5d3ea1fb03143ace7ddb6cbf
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-101115 image build -t localhost/my-image:functional-101115 testdata/build --alsologtostderr:
I1008 22:12:23.052214   32991 out.go:360] Setting OutFile to fd 1 ...
I1008 22:12:23.055152   32991 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:23.055190   32991 out.go:374] Setting ErrFile to fd 2...
I1008 22:12:23.055211   32991 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1008 22:12:23.055559   32991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
I1008 22:12:23.056251   32991 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:23.057465   32991 config.go:182] Loaded profile config "functional-101115": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1008 22:12:23.057984   32991 cli_runner.go:164] Run: docker container inspect functional-101115 --format={{.State.Status}}
I1008 22:12:23.085257   32991 ssh_runner.go:195] Run: systemctl --version
I1008 22:12:23.085306   32991 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101115
I1008 22:12:23.106967   32991 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/functional-101115/id_rsa Username:docker}
I1008 22:12:23.220273   32991 build_images.go:161] Building image from path: /tmp/build.3115377655.tar
I1008 22:12:23.220334   32991 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1008 22:12:23.232366   32991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3115377655.tar
I1008 22:12:23.237148   32991 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3115377655.tar: stat -c "%s %y" /var/lib/minikube/build/build.3115377655.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3115377655.tar': No such file or directory
I1008 22:12:23.237176   32991 ssh_runner.go:362] scp /tmp/build.3115377655.tar --> /var/lib/minikube/build/build.3115377655.tar (3072 bytes)
I1008 22:12:23.257386   32991 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3115377655
I1008 22:12:23.266251   32991 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3115377655 -xf /var/lib/minikube/build/build.3115377655.tar
I1008 22:12:23.275323   32991 crio.go:315] Building image: /var/lib/minikube/build/build.3115377655
I1008 22:12:23.275396   32991 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-101115 /var/lib/minikube/build/build.3115377655 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1008 22:12:26.293838   32991 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-101115 /var/lib/minikube/build/build.3115377655 --cgroup-manager=cgroupfs: (3.018421311s)
I1008 22:12:26.293903   32991 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3115377655
I1008 22:12:26.303113   32991 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3115377655.tar
I1008 22:12:26.310942   32991 build_images.go:217] Built localhost/my-image:functional-101115 from /tmp/build.3115377655.tar
I1008 22:12:26.310976   32991 build_images.go:133] succeeded building to: functional-101115
I1008 22:12:26.310981   32991 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.90s)
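The image-build trace above shows the sequence minikube drives over SSH for the crio/podman runtime: stage the build-context tarball under /var/lib/minikube/build, unpack it, run `sudo podman build`, then clean up. The Go sketch below mirrors that sequence with plain ssh/scp; the SSH target, local paths and image tag are illustrative assumptions, not minikube's own code.

// buildimage_sketch.go - mirrors the build steps recorded in the trace above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

const node = "docker@127.0.0.1" // assumption: SSH endpoint of the minikube node (the trace uses port 32778)

// runOnNode is the counterpart of ssh_runner.Run in the trace: run one command on the node over ssh.
func runOnNode(args ...string) error {
	cmd := exec.Command("ssh", append([]string{node}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("%v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	localTar := "/tmp/build.tar"                     // assumption: local build-context tarball
	remoteTar := "/var/lib/minikube/build/build.tar" // staging paths, as in the trace
	remoteDir := "/var/lib/minikube/build/build"

	if err := runOnNode("sudo", "mkdir", "-p", remoteDir); err != nil { // sudo mkdir -p
		log.Fatal(err)
	}
	if err := exec.Command("scp", localTar, node+":"+remoteTar).Run(); err != nil { // copy the tarball over
		log.Fatal(err)
	}
	if err := runOnNode("sudo", "tar", "-C", remoteDir, "-xf", remoteTar); err != nil { // unpack the context
		log.Fatal(err)
	}
	// sudo podman build with the same flags shown in the trace.
	if err := runOnNode("sudo", "podman", "build", "-t", "localhost/my-image:functional-101115",
		remoteDir, "--cgroup-manager=cgroupfs"); err != nil {
		log.Fatal(err)
	}
	_ = runOnNode("sudo", "rm", "-rf", remoteDir) // cleanup, as in the trace
	_ = runOnNode("sudo", "rm", "-f", remoteTar)
	fmt.Println("built localhost/my-image:functional-101115")
}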

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-101115
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image rm kicbase/echo-server:functional-101115 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-101115 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-101115
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-101115
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-101115
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (168.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1008 22:14:29.998657    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m47.78641463s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (168.69s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 kubectl -- rollout status deployment/busybox: (3.848792655s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-2k6j4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-dlh5m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-g8q7l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-2k6j4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-dlh5m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-g8q7l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-2k6j4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-dlh5m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-g8q7l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.54s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-2k6j4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-2k6j4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-dlh5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-dlh5m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-g8q7l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 kubectl -- exec busybox-7b57f96db7-g8q7l -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)
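The host-ping check above extracts the host IP inside each pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` and then pings it. Below is a small sketch of just that parsing step (reads the nslookup output from stdin; the fifth-line/third-field layout is an assumption carried over from the awk/cut pipeline, which targets busybox nslookup output).

// hostip_parse.go - replicate the awk 'NR==5' | cut -d' ' -f3 extraction in Go.
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

func hostIPFromNslookup(output string) (string, error) {
	lines := strings.Split(output, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("expected at least 5 lines, got %d", len(lines))
	}
	// cut -d' ' -f3 splits on single spaces; strings.Split matches that behaviour.
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected line 5: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	in, _ := io.ReadAll(os.Stdin)
	ip, err := hostIPFromNslookup(string(in))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // e.g. the host gateway pinged in the log (192.168.49.1)
}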

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (30.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 node add --alsologtostderr -v 5
E1008 22:15:53.067768    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 node add --alsologtostderr -v 5: (29.318782262s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5: (1.07264395s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.39s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-315086 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.046737266s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 status --output json --alsologtostderr -v 5: (1.062189889s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp testdata/cp-test.txt ha-315086:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile297731462/001/cp-test_ha-315086.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086:/home/docker/cp-test.txt ha-315086-m02:/home/docker/cp-test_ha-315086_ha-315086-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m02 "sudo cat /home/docker/cp-test_ha-315086_ha-315086-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086:/home/docker/cp-test.txt ha-315086-m03:/home/docker/cp-test_ha-315086_ha-315086-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m03 "sudo cat /home/docker/cp-test_ha-315086_ha-315086-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086:/home/docker/cp-test.txt ha-315086-m04:/home/docker/cp-test_ha-315086_ha-315086-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m04 "sudo cat /home/docker/cp-test_ha-315086_ha-315086-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp testdata/cp-test.txt ha-315086-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile297731462/001/cp-test_ha-315086-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m02:/home/docker/cp-test.txt ha-315086:/home/docker/cp-test_ha-315086-m02_ha-315086.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086 "sudo cat /home/docker/cp-test_ha-315086-m02_ha-315086.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m02:/home/docker/cp-test.txt ha-315086-m03:/home/docker/cp-test_ha-315086-m02_ha-315086-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m03 "sudo cat /home/docker/cp-test_ha-315086-m02_ha-315086-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m02:/home/docker/cp-test.txt ha-315086-m04:/home/docker/cp-test_ha-315086-m02_ha-315086-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m04 "sudo cat /home/docker/cp-test_ha-315086-m02_ha-315086-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp testdata/cp-test.txt ha-315086-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile297731462/001/cp-test_ha-315086-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m03:/home/docker/cp-test.txt ha-315086:/home/docker/cp-test_ha-315086-m03_ha-315086.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086 "sudo cat /home/docker/cp-test_ha-315086-m03_ha-315086.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m03:/home/docker/cp-test.txt ha-315086-m02:/home/docker/cp-test_ha-315086-m03_ha-315086-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m02 "sudo cat /home/docker/cp-test_ha-315086-m03_ha-315086-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m03:/home/docker/cp-test.txt ha-315086-m04:/home/docker/cp-test_ha-315086-m03_ha-315086-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m04 "sudo cat /home/docker/cp-test_ha-315086-m03_ha-315086-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp testdata/cp-test.txt ha-315086-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile297731462/001/cp-test_ha-315086-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m04:/home/docker/cp-test.txt ha-315086:/home/docker/cp-test_ha-315086-m04_ha-315086.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086 "sudo cat /home/docker/cp-test_ha-315086-m04_ha-315086.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m04:/home/docker/cp-test.txt ha-315086-m02:/home/docker/cp-test_ha-315086-m04_ha-315086-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m02 "sudo cat /home/docker/cp-test_ha-315086-m04_ha-315086-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 cp ha-315086-m04:/home/docker/cp-test.txt ha-315086-m03:/home/docker/cp-test_ha-315086-m04_ha-315086-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 ssh -n ha-315086-m03 "sudo cat /home/docker/cp-test_ha-315086-m04_ha-315086-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.00s)
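The CopyFile block above repeats one pattern for every node pair: `minikube cp` a file onto a node, then `minikube ssh -n <node> "sudo cat ..."` to read it back. A condensed sketch of that verify loop follows; the binary path and profile name are assumptions taken from this run, and the comparison against the local testdata file stands in for the test's own assertion.

// cpverify.go - copy a file to each node and read it back, as in the test above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

const minikube = "out/minikube-linux-arm64" // assumption: the binary built by this job

func run(args ...string) string {
	out, err := exec.Command(minikube, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "ha-315086"
	nodes := []string{"ha-315086", "ha-315086-m02", "ha-315086-m03", "ha-315086-m04"}

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes {
		// minikube -p <profile> cp <src> <node>:<dst>, then read it back over ssh.
		run("-p", profile, "cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt")
		got := run("-p", profile, "ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
		if strings.TrimSpace(got) != strings.TrimSpace(string(want)) {
			log.Fatalf("content mismatch on node %s", n)
		}
		fmt.Println("cp verified on", n)
	}
}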

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 node stop m02 --alsologtostderr -v 5: (11.961310285s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5: exit status 7 (828.940649ms)

                                                
                                                
-- stdout --
	ha-315086
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315086-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-315086-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-315086-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:16:29.497571   47755 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:16:29.497728   47755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:16:29.497734   47755 out.go:374] Setting ErrFile to fd 2...
	I1008 22:16:29.497739   47755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:16:29.498001   47755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:16:29.498179   47755 out.go:368] Setting JSON to false
	I1008 22:16:29.498204   47755 mustload.go:65] Loading cluster: ha-315086
	I1008 22:16:29.498374   47755 notify.go:220] Checking for updates...
	I1008 22:16:29.498615   47755 config.go:182] Loaded profile config "ha-315086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:16:29.498634   47755 status.go:174] checking status of ha-315086 ...
	I1008 22:16:29.499424   47755 cli_runner.go:164] Run: docker container inspect ha-315086 --format={{.State.Status}}
	I1008 22:16:29.520977   47755 status.go:371] ha-315086 host status = "Running" (err=<nil>)
	I1008 22:16:29.521001   47755 host.go:66] Checking if "ha-315086" exists ...
	I1008 22:16:29.521310   47755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-315086
	I1008 22:16:29.541810   47755 host.go:66] Checking if "ha-315086" exists ...
	I1008 22:16:29.542120   47755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:16:29.542171   47755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-315086
	I1008 22:16:29.564562   47755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/ha-315086/id_rsa Username:docker}
	I1008 22:16:29.667792   47755 ssh_runner.go:195] Run: systemctl --version
	I1008 22:16:29.674770   47755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:16:29.688811   47755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:16:29.778618   47755 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-08 22:16:29.767454886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:16:29.779553   47755 kubeconfig.go:125] found "ha-315086" server: "https://192.168.49.254:8443"
	I1008 22:16:29.779605   47755 api_server.go:166] Checking apiserver status ...
	I1008 22:16:29.779689   47755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:16:29.793366   47755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1231/cgroup
	I1008 22:16:29.802228   47755 api_server.go:182] apiserver freezer: "3:freezer:/docker/767a5158efc9043c0d28e3a0b3b35f06f51a2ce675469a1172c0882c81c92894/crio/crio-522f1c3784884d27f5bf121f07524930161656225fe5ca6afcd2208cd9116b31"
	I1008 22:16:29.802309   47755 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/767a5158efc9043c0d28e3a0b3b35f06f51a2ce675469a1172c0882c81c92894/crio/crio-522f1c3784884d27f5bf121f07524930161656225fe5ca6afcd2208cd9116b31/freezer.state
	I1008 22:16:29.810342   47755 api_server.go:204] freezer state: "THAWED"
	I1008 22:16:29.810372   47755 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1008 22:16:29.818762   47755 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1008 22:16:29.818791   47755 status.go:463] ha-315086 apiserver status = Running (err=<nil>)
	I1008 22:16:29.818802   47755 status.go:176] ha-315086 status: &{Name:ha-315086 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 22:16:29.818820   47755 status.go:174] checking status of ha-315086-m02 ...
	I1008 22:16:29.819128   47755 cli_runner.go:164] Run: docker container inspect ha-315086-m02 --format={{.State.Status}}
	I1008 22:16:29.837394   47755 status.go:371] ha-315086-m02 host status = "Stopped" (err=<nil>)
	I1008 22:16:29.837418   47755 status.go:384] host is not running, skipping remaining checks
	I1008 22:16:29.837424   47755 status.go:176] ha-315086-m02 status: &{Name:ha-315086-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 22:16:29.837445   47755 status.go:174] checking status of ha-315086-m03 ...
	I1008 22:16:29.837799   47755 cli_runner.go:164] Run: docker container inspect ha-315086-m03 --format={{.State.Status}}
	I1008 22:16:29.855216   47755 status.go:371] ha-315086-m03 host status = "Running" (err=<nil>)
	I1008 22:16:29.855242   47755 host.go:66] Checking if "ha-315086-m03" exists ...
	I1008 22:16:29.855560   47755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-315086-m03
	I1008 22:16:29.873078   47755 host.go:66] Checking if "ha-315086-m03" exists ...
	I1008 22:16:29.873402   47755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:16:29.873447   47755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-315086-m03
	I1008 22:16:29.890716   47755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/ha-315086-m03/id_rsa Username:docker}
	I1008 22:16:29.991341   47755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:16:30.016585   47755 kubeconfig.go:125] found "ha-315086" server: "https://192.168.49.254:8443"
	I1008 22:16:30.016630   47755 api_server.go:166] Checking apiserver status ...
	I1008 22:16:30.016676   47755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:16:30.038716   47755 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	I1008 22:16:30.050667   47755 api_server.go:182] apiserver freezer: "3:freezer:/docker/373ca9ce6059ddd27fef85e77fb4d087f19fca372868ef0fe861dc8c94864dab/crio/crio-418a207278fa441a3f8a1a060b12b45098c009027edd11a545c53fb514c960a3"
	I1008 22:16:30.050790   47755 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/373ca9ce6059ddd27fef85e77fb4d087f19fca372868ef0fe861dc8c94864dab/crio/crio-418a207278fa441a3f8a1a060b12b45098c009027edd11a545c53fb514c960a3/freezer.state
	I1008 22:16:30.063488   47755 api_server.go:204] freezer state: "THAWED"
	I1008 22:16:30.063520   47755 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1008 22:16:30.072371   47755 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1008 22:16:30.072410   47755 status.go:463] ha-315086-m03 apiserver status = Running (err=<nil>)
	I1008 22:16:30.072447   47755 status.go:176] ha-315086-m03 status: &{Name:ha-315086-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 22:16:30.072474   47755 status.go:174] checking status of ha-315086-m04 ...
	I1008 22:16:30.072815   47755 cli_runner.go:164] Run: docker container inspect ha-315086-m04 --format={{.State.Status}}
	I1008 22:16:30.093908   47755 status.go:371] ha-315086-m04 host status = "Running" (err=<nil>)
	I1008 22:16:30.093935   47755 host.go:66] Checking if "ha-315086-m04" exists ...
	I1008 22:16:30.094279   47755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-315086-m04
	I1008 22:16:30.113584   47755 host.go:66] Checking if "ha-315086-m04" exists ...
	I1008 22:16:30.114029   47755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:16:30.114081   47755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-315086-m04
	I1008 22:16:30.134430   47755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/ha-315086-m04/id_rsa Username:docker}
	I1008 22:16:30.243367   47755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:16:30.257869   47755 status.go:176] ha-315086-m04 status: &{Name:ha-315086-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
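The status trace in the stderr block above checks each control-plane node by locating the kube-apiserver process, confirming its freezer cgroup reports THAWED, and finally probing /healthz on the load-balanced endpoint. The sketch below reproduces only that last probe; the endpoint is taken from the log, and InsecureSkipVerify is a simplifying assumption standing in for the cluster-CA handling minikube actually performs.

// healthz_probe.go - probe the apiserver health endpoint seen in the trace above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only; the real check trusts the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with body "ok", matching the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}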

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (28.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 node start m02 --alsologtostderr -v 5
E1008 22:16:38.627535    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:38.633806    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:38.645145    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:38.666436    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:38.707744    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:38.789087    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:38.950383    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:39.271653    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:39.913562    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:41.195178    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:43.757242    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:16:48.878840    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 node start m02 --alsologtostderr -v 5: (26.892173994s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5
E1008 22:16:59.120784    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5: (1.202999513s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.565547607s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (113.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 stop --alsologtostderr -v 5
E1008 22:17:19.602112    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 stop --alsologtostderr -v 5: (26.47003084s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 start --wait true --alsologtostderr -v 5
E1008 22:18:00.563612    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 start --wait true --alsologtostderr -v 5: (1m27.226854967s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (113.90s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 node delete m03 --alsologtostderr -v 5: (10.781168702s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.71s)
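The readiness assertion above uses a kubectl go-template that prints each node's Ready condition. A minimal standalone version of the same check is sketched below, assuming kubectl is on PATH and the current context points at this cluster.

// ready_check.go - assert every node's Ready condition is "True", as the test does above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	// One status word per node; anything other than "True" fails the check.
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			log.Fatalf("found a node that is not Ready: %q", status)
		}
	}
	fmt.Println("all nodes Ready")
}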

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 stop --alsologtostderr -v 5
E1008 22:19:22.486150    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:19:29.998892    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 stop --alsologtostderr -v 5: (35.539060904s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5: exit status 7 (107.815273ms)

                                                
                                                
-- stdout --
	ha-315086
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-315086-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-315086-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:19:42.794054   58982 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:19:42.794180   58982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:19:42.794189   58982 out.go:374] Setting ErrFile to fd 2...
	I1008 22:19:42.794194   58982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:19:42.794492   58982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:19:42.794692   58982 out.go:368] Setting JSON to false
	I1008 22:19:42.794734   58982 mustload.go:65] Loading cluster: ha-315086
	I1008 22:19:42.794835   58982 notify.go:220] Checking for updates...
	I1008 22:19:42.795147   58982 config.go:182] Loaded profile config "ha-315086": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:19:42.795158   58982 status.go:174] checking status of ha-315086 ...
	I1008 22:19:42.796016   58982 cli_runner.go:164] Run: docker container inspect ha-315086 --format={{.State.Status}}
	I1008 22:19:42.813082   58982 status.go:371] ha-315086 host status = "Stopped" (err=<nil>)
	I1008 22:19:42.813120   58982 status.go:384] host is not running, skipping remaining checks
	I1008 22:19:42.813127   58982 status.go:176] ha-315086 status: &{Name:ha-315086 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 22:19:42.813156   58982 status.go:174] checking status of ha-315086-m02 ...
	I1008 22:19:42.813462   58982 cli_runner.go:164] Run: docker container inspect ha-315086-m02 --format={{.State.Status}}
	I1008 22:19:42.835300   58982 status.go:371] ha-315086-m02 host status = "Stopped" (err=<nil>)
	I1008 22:19:42.835326   58982 status.go:384] host is not running, skipping remaining checks
	I1008 22:19:42.835334   58982 status.go:176] ha-315086-m02 status: &{Name:ha-315086-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 22:19:42.835355   58982 status.go:174] checking status of ha-315086-m04 ...
	I1008 22:19:42.835642   58982 cli_runner.go:164] Run: docker container inspect ha-315086-m04 --format={{.State.Status}}
	I1008 22:19:42.854559   58982 status.go:371] ha-315086-m04 host status = "Stopped" (err=<nil>)
	I1008 22:19:42.854630   58982 status.go:384] host is not running, skipping remaining checks
	I1008 22:19:42.854645   58982 status.go:176] ha-315086-m04 status: &{Name:ha-315086-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (70.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m9.790546398s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (70.80s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 node add --control-plane --alsologtostderr -v 5
E1008 22:21:38.627161    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:22:06.328330    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 node add --control-plane --alsologtostderr -v 5: (1m17.843334772s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-315086 status --alsologtostderr -v 5: (1.129879734s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.97s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.07671943s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
TestJSONOutput/start/Command (82.85s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-881367 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-881367 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.839794365s)
--- PASS: TestJSONOutput/start/Command (82.85s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.69s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-881367 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-881367 --output=json --user=testUser: (5.688994546s)
--- PASS: TestJSONOutput/stop/Command (5.69s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-670035 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-670035 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (89.785449ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6854a684-6e83-4040-a3e0-dd0fc31aa988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-670035] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d159a367-67c6-4772-ad7f-fcc8738ad57a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21681"}}
	{"specversion":"1.0","id":"c711852a-568d-4266-baa0-3c96689c32fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ac9e6605-7a5c-4db7-aa4a-729df6cb3ede","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig"}}
	{"specversion":"1.0","id":"b3f6d5b2-7f94-4c9b-935c-2ac7048374a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube"}}
	{"specversion":"1.0","id":"1cbb804e-895a-465c-b5e9-c4087bb01305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6ca94159-024a-473c-9806-785c04c2523d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8972635e-edb2-4606-9a42-5b6a2b4ad3ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-670035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-670035
--- PASS: TestErrorJSONOutput (0.23s)
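Each line emitted under --output=json above is a CloudEvents-style JSON object, so a run can be post-processed without scraping plain text. A minimal sketch, assuming jq is available on the host and reusing the profile name from this run:
    # print the human-readable message carried by each event
    out/minikube-linux-arm64 start -p json-output-error-670035 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.data.message != null) | .data.message'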

                                                
                                    
TestKicCustomNetwork/create_custom_network (49.55s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-546096 --network=
E1008 22:24:29.998812    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-546096 --network=: (47.351067806s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-546096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-546096
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-546096: (2.168184732s)
--- PASS: TestKicCustomNetwork/create_custom_network (49.55s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.63s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-242962 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-242962 --network=bridge: (35.611374899s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-242962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-242962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-242962: (1.997294348s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.63s)

                                                
                                    
TestKicExistingNetwork (37.88s)
=== RUN   TestKicExistingNetwork
I1008 22:25:28.711596    4286 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1008 22:25:28.727365    4286 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1008 22:25:28.727439    4286 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1008 22:25:28.727456    4286 cli_runner.go:164] Run: docker network inspect existing-network
W1008 22:25:28.744290    4286 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1008 22:25:28.744317    4286 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1008 22:25:28.744333    4286 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1008 22:25:28.744444    4286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1008 22:25:28.761019    4286 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c46765bca8fb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:f9:7c:ba:7b:ab} reservation:<nil>}
I1008 22:25:28.765101    4286 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1008 22:25:28.765536    4286 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001acc9d0}
I1008 22:25:28.766088    4286 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1008 22:25:28.766171    4286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1008 22:25:28.825869    4286 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-895272 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-895272 --network=existing-network: (35.689755882s)
helpers_test.go:175: Cleaning up "existing-network-895272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-895272
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-895272: (2.040174203s)
I1008 22:26:06.571660    4286 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.88s)
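The test above pre-creates a Docker network and then starts minikube against it with --network. A condensed sketch of the same flow, built only from commands that appear in this run (the network and profile names are the ones from this log):
    docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 existing-network
    out/minikube-linux-arm64 start -p existing-network-895272 --network=existing-network
    docker network ls --format {{.Name}}    # the pre-created network should still be listed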

                                                
                                    
TestKicCustomSubnet (39.35s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-341441 --subnet=192.168.60.0/24
E1008 22:26:38.632646    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-341441 --subnet=192.168.60.0/24: (37.100764033s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-341441 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-341441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-341441
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-341441: (2.218389265s)
--- PASS: TestKicCustomSubnet (39.35s)

                                                
                                    
TestKicStaticIP (36.66s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-089435 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-089435 --static-ip=192.168.200.200: (34.351689303s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-089435 ip
helpers_test.go:175: Cleaning up "static-ip-089435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-089435
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-089435: (2.152797159s)
--- PASS: TestKicStaticIP (36.66s)
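Verifying a pinned address comes down to the two commands the test runs back to back; a small sketch with the values from this run:
    out/minikube-linux-arm64 start -p static-ip-089435 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-089435 ip    # expected to print 192.168.200.200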

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (71.02s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-575892 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-575892 --driver=docker  --container-runtime=crio: (30.314029802s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-578534 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-578534 --driver=docker  --container-runtime=crio: (35.268737062s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-575892
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-578534
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-578534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-578534
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-578534: (2.024892927s)
helpers_test.go:175: Cleaning up "first-575892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-575892
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-575892: (1.948799239s)
--- PASS: TestMinikubeProfile (71.02s)
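The profile list -ojson calls above can also be inspected by hand; a hedged sketch, assuming the JSON groups profiles under a valid array whose entries carry a Name field and that jq is installed:
    # list the names of all valid profiles known to this minikube home
    out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'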

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.42s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-316023 --memory=3072 --mount-string /tmp/TestMountStartserial2724701638/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-316023 --memory=3072 --mount-string /tmp/TestMountStartserial2724701638/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.414179912s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.42s)
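The start flags above wire a host directory into the guest at boot; a condensed sketch of the same invocation plus the check the following subtest performs (paths and profile name taken from this run, some mount flags omitted):
    out/minikube-linux-arm64 start -p mount-start-1-316023 --memory=3072 \
      --mount-string /tmp/TestMountStartserial2724701638/001:/minikube-host \
      --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p mount-start-1-316023 ssh -- ls /minikube-host    # should list the mounted host directory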

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-316023 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.26s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-318591 --memory=3072 --mount-string /tmp/TestMountStartserial2724701638/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-318591 --memory=3072 --mount-string /tmp/TestMountStartserial2724701638/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.258039853s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.26s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-318591 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-316023 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-316023 --alsologtostderr -v=5: (1.655061747s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-318591 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-318591
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-318591: (1.224068936s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.67s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-318591
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-318591: (7.667467699s)
--- PASS: TestMountStart/serial/RestartStopped (8.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-318591 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (140.59s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-674749 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1008 22:29:29.998750    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-674749 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m20.059115559s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.59s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.59s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-674749 -- rollout status deployment/busybox: (3.759847441s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-sqmq7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-txvc9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-sqmq7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-txvc9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-sqmq7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-txvc9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.59s)
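A follow-up check that is not part of the test but is handy after a rollout like this is confirming the busybox replicas landed on different nodes; a hedged kubectl sketch:
    kubectl --context multinode-674749 get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.nodeName}{"\n"}{end}'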

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.94s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-sqmq7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-sqmq7 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-txvc9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-674749 -- exec busybox-7b57f96db7-txvc9 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
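The pipeline run inside each pod first resolves host.minikube.internal and then pings the extracted address; broken into steps, the same logic looks roughly like this when run from a pod shell:
    # nslookup prints the resolved address on its fifth line; awk/cut isolate the IP
    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"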

                                                
                                    
TestMultiNode/serial/AddNode (58.86s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-674749 -v=5 --alsologtostderr
E1008 22:31:38.627112    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-674749 -v=5 --alsologtostderr: (58.180612975s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.86s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-674749 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.4s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp testdata/cp-test.txt multinode-674749:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile966633062/001/cp-test_multinode-674749.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749:/home/docker/cp-test.txt multinode-674749-m02:/home/docker/cp-test_multinode-674749_multinode-674749-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749 "sudo cat /home/docker/cp-test.txt"
E1008 22:32:33.070048    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m02 "sudo cat /home/docker/cp-test_multinode-674749_multinode-674749-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749:/home/docker/cp-test.txt multinode-674749-m03:/home/docker/cp-test_multinode-674749_multinode-674749-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m03 "sudo cat /home/docker/cp-test_multinode-674749_multinode-674749-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp testdata/cp-test.txt multinode-674749-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile966633062/001/cp-test_multinode-674749-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749-m02:/home/docker/cp-test.txt multinode-674749:/home/docker/cp-test_multinode-674749-m02_multinode-674749.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749 "sudo cat /home/docker/cp-test_multinode-674749-m02_multinode-674749.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749-m02:/home/docker/cp-test.txt multinode-674749-m03:/home/docker/cp-test_multinode-674749-m02_multinode-674749-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m03 "sudo cat /home/docker/cp-test_multinode-674749-m02_multinode-674749-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp testdata/cp-test.txt multinode-674749-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile966633062/001/cp-test_multinode-674749-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749-m03:/home/docker/cp-test.txt multinode-674749:/home/docker/cp-test_multinode-674749-m03_multinode-674749.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749 "sudo cat /home/docker/cp-test_multinode-674749-m03_multinode-674749.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749-m03:/home/docker/cp-test.txt multinode-674749-m02:/home/docker/cp-test_multinode-674749-m03_multinode-674749-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m02 "sudo cat /home/docker/cp-test_multinode-674749-m03_multinode-674749-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.40s)
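The copy matrix above exercises minikube cp in three directions: host to node, node to host, and node to node. A trimmed sketch of one round trip, using commands taken from this run:
    out/minikube-linux-arm64 -p multinode-674749 cp testdata/cp-test.txt multinode-674749:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-674749 cp multinode-674749:/home/docker/cp-test.txt multinode-674749-m02:/home/docker/cp-test_multinode-674749_multinode-674749-m02.txt
    out/minikube-linux-arm64 -p multinode-674749 ssh -n multinode-674749-m02 "sudo cat /home/docker/cp-test_multinode-674749_multinode-674749-m02.txt"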

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-674749 node stop m03: (1.23955002s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-674749 status: exit status 7 (559.479037ms)

                                                
                                                
-- stdout --
	multinode-674749
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-674749-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-674749-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-674749 status --alsologtostderr: exit status 7 (547.625777ms)

                                                
                                                
-- stdout --
	multinode-674749
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-674749-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-674749-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:32:42.838047  109396 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:32:42.838228  109396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:32:42.838259  109396 out.go:374] Setting ErrFile to fd 2...
	I1008 22:32:42.838283  109396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:32:42.838557  109396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:32:42.838781  109396 out.go:368] Setting JSON to false
	I1008 22:32:42.838860  109396 mustload.go:65] Loading cluster: multinode-674749
	I1008 22:32:42.838932  109396 notify.go:220] Checking for updates...
	I1008 22:32:42.840018  109396 config.go:182] Loaded profile config "multinode-674749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:32:42.840098  109396 status.go:174] checking status of multinode-674749 ...
	I1008 22:32:42.840973  109396 cli_runner.go:164] Run: docker container inspect multinode-674749 --format={{.State.Status}}
	I1008 22:32:42.861727  109396 status.go:371] multinode-674749 host status = "Running" (err=<nil>)
	I1008 22:32:42.861749  109396 host.go:66] Checking if "multinode-674749" exists ...
	I1008 22:32:42.862136  109396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-674749
	I1008 22:32:42.887402  109396 host.go:66] Checking if "multinode-674749" exists ...
	I1008 22:32:42.887758  109396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:32:42.887818  109396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-674749
	I1008 22:32:42.912157  109396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32906 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/multinode-674749/id_rsa Username:docker}
	I1008 22:32:43.019711  109396 ssh_runner.go:195] Run: systemctl --version
	I1008 22:32:43.027771  109396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:32:43.042293  109396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:32:43.105111  109396 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-08 22:32:43.094947854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:32:43.105772  109396 kubeconfig.go:125] found "multinode-674749" server: "https://192.168.58.2:8443"
	I1008 22:32:43.105809  109396 api_server.go:166] Checking apiserver status ...
	I1008 22:32:43.105854  109396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 22:32:43.118183  109396 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1008 22:32:43.126736  109396 api_server.go:182] apiserver freezer: "3:freezer:/docker/fe86eed3d0e58799e09379bf24c10f32cdfd2be4f09f15a11fc7251169bd41f9/crio/crio-0a3ed3473894703051bc94a29c774f9a20bbb47215389f1c9d0d714570a4dbea"
	I1008 22:32:43.126809  109396 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fe86eed3d0e58799e09379bf24c10f32cdfd2be4f09f15a11fc7251169bd41f9/crio/crio-0a3ed3473894703051bc94a29c774f9a20bbb47215389f1c9d0d714570a4dbea/freezer.state
	I1008 22:32:43.134533  109396 api_server.go:204] freezer state: "THAWED"
	I1008 22:32:43.134567  109396 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1008 22:32:43.142912  109396 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1008 22:32:43.142941  109396 status.go:463] multinode-674749 apiserver status = Running (err=<nil>)
	I1008 22:32:43.142980  109396 status.go:176] multinode-674749 status: &{Name:multinode-674749 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 22:32:43.143006  109396 status.go:174] checking status of multinode-674749-m02 ...
	I1008 22:32:43.143361  109396 cli_runner.go:164] Run: docker container inspect multinode-674749-m02 --format={{.State.Status}}
	I1008 22:32:43.161330  109396 status.go:371] multinode-674749-m02 host status = "Running" (err=<nil>)
	I1008 22:32:43.161356  109396 host.go:66] Checking if "multinode-674749-m02" exists ...
	I1008 22:32:43.161682  109396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-674749-m02
	I1008 22:32:43.179004  109396 host.go:66] Checking if "multinode-674749-m02" exists ...
	I1008 22:32:43.179329  109396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1008 22:32:43.179378  109396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-674749-m02
	I1008 22:32:43.202966  109396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32911 SSHKeyPath:/home/jenkins/minikube-integration/21681-2481/.minikube/machines/multinode-674749-m02/id_rsa Username:docker}
	I1008 22:32:43.303035  109396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 22:32:43.315786  109396 status.go:176] multinode-674749-m02 status: &{Name:multinode-674749-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1008 22:32:43.315820  109396 status.go:174] checking status of multinode-674749-m03 ...
	I1008 22:32:43.316132  109396 cli_runner.go:164] Run: docker container inspect multinode-674749-m03 --format={{.State.Status}}
	I1008 22:32:43.333726  109396 status.go:371] multinode-674749-m03 host status = "Stopped" (err=<nil>)
	I1008 22:32:43.333750  109396 status.go:384] host is not running, skipping remaining checks
	I1008 22:32:43.333757  109396 status.go:176] multinode-674749-m03 status: &{Name:multinode-674749-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
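Because minikube status exits non-zero (exit status 7 here) as soon as any node is stopped, the stop can be verified from a script without parsing the text output; a small sketch reusing the commands above:
    out/minikube-linux-arm64 -p multinode-674749 node stop m03
    if ! out/minikube-linux-arm64 -p multinode-674749 status > /dev/null; then
      echo "at least one node is not running, as expected after stopping m03"
    fi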

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.15s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-674749 node start m03 -v=5 --alsologtostderr: (7.342355568s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.15s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (71.67s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-674749
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-674749
E1008 22:33:01.689680    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-674749: (24.725987896s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-674749 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-674749 --wait=true -v=5 --alsologtostderr: (46.824767537s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-674749
--- PASS: TestMultiNode/serial/RestartKeepsNodes (71.67s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.55s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-674749 node delete m03: (4.862525312s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.78s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 stop
E1008 22:34:29.998535    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-674749 stop: (23.599983058s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-674749 status: exit status 7 (87.764886ms)

                                                
                                                
-- stdout --
	multinode-674749
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-674749-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-674749 status --alsologtostderr: exit status 7 (87.572071ms)

                                                
                                                
-- stdout --
	multinode-674749
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-674749-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:34:32.447112  117134 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:34:32.447396  117134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:34:32.447413  117134 out.go:374] Setting ErrFile to fd 2...
	I1008 22:34:32.447419  117134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:34:32.447983  117134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:34:32.448227  117134 out.go:368] Setting JSON to false
	I1008 22:34:32.448279  117134 mustload.go:65] Loading cluster: multinode-674749
	I1008 22:34:32.448371  117134 notify.go:220] Checking for updates...
	I1008 22:34:32.448733  117134 config.go:182] Loaded profile config "multinode-674749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:34:32.448754  117134 status.go:174] checking status of multinode-674749 ...
	I1008 22:34:32.449366  117134 cli_runner.go:164] Run: docker container inspect multinode-674749 --format={{.State.Status}}
	I1008 22:34:32.468735  117134 status.go:371] multinode-674749 host status = "Stopped" (err=<nil>)
	I1008 22:34:32.468758  117134 status.go:384] host is not running, skipping remaining checks
	I1008 22:34:32.468765  117134 status.go:176] multinode-674749 status: &{Name:multinode-674749 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1008 22:34:32.468797  117134 status.go:174] checking status of multinode-674749-m02 ...
	I1008 22:34:32.469103  117134 cli_runner.go:164] Run: docker container inspect multinode-674749-m02 --format={{.State.Status}}
	I1008 22:34:32.486091  117134 status.go:371] multinode-674749-m02 host status = "Stopped" (err=<nil>)
	I1008 22:34:32.486111  117134 status.go:384] host is not running, skipping remaining checks
	I1008 22:34:32.486121  117134 status.go:176] multinode-674749-m02 status: &{Name:multinode-674749-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.78s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.05s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-674749 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-674749 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.346645588s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-674749 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.05s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.37s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-674749
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-674749-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-674749-m02 --driver=docker  --container-runtime=crio: exit status 14 (98.982098ms)

                                                
                                                
-- stdout --
	* [multinode-674749-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-674749-m02' is duplicated with machine name 'multinode-674749-m02' in profile 'multinode-674749'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-674749-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-674749-m03 --driver=docker  --container-runtime=crio: (34.884046212s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-674749
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-674749: exit status 80 (383.179142ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-674749 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-674749-m03 already exists in multinode-674749-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-674749-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-674749-m03: (1.94584336s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.37s)

                                                
                                    
TestPreload (118.95s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-117053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1008 22:36:38.627905    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-117053 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.549764042s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-117053 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-117053 image pull gcr.io/k8s-minikube/busybox: (2.107838813s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-117053
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-117053: (5.753454761s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-117053 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-117053 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (44.934988034s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-117053 image list
helpers_test.go:175: Cleaning up "test-preload-117053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-117053
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-117053: (2.367014499s)
--- PASS: TestPreload (118.95s)
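The preload scenario reduces to: pull an extra image, stop, restart, and confirm the image survived; a condensed sketch built from the commands in this run (profile name from this log):
    out/minikube-linux-arm64 -p test-preload-117053 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-117053
    out/minikube-linux-arm64 start -p test-preload-117053 --memory=3072 --wait=true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p test-preload-117053 image list    # busybox should still appear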

                                                
                                    
TestScheduledStopUnix (112.96s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-748542 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-748542 --memory=3072 --driver=docker  --container-runtime=crio: (36.899050425s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-748542 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-748542 -n scheduled-stop-748542
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-748542 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1008 22:38:38.751517    4286 retry.go:31] will retry after 123.249µs: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.752090    4286 retry.go:31] will retry after 117.05µs: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.752356    4286 retry.go:31] will retry after 269.034µs: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.753742    4286 retry.go:31] will retry after 300.808µs: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.754980    4286 retry.go:31] will retry after 591.042µs: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.756062    4286 retry.go:31] will retry after 716.429µs: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.757238    4286 retry.go:31] will retry after 1.590109ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.760542    4286 retry.go:31] will retry after 2.299557ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.763917    4286 retry.go:31] will retry after 3.298465ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.768132    4286 retry.go:31] will retry after 4.060723ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.772314    4286 retry.go:31] will retry after 3.033662ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.775468    4286 retry.go:31] will retry after 9.371966ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.785733    4286 retry.go:31] will retry after 12.508296ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.799036    4286 retry.go:31] will retry after 10.518931ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.809790    4286 retry.go:31] will retry after 30.442598ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
I1008 22:38:38.841061    4286 retry.go:31] will retry after 50.80167ms: open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/scheduled-stop-748542/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-748542 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-748542 -n scheduled-stop-748542
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-748542
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-748542 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1008 22:39:29.998524    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-748542
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-748542: exit status 7 (68.191007ms)

                                                
                                                
-- stdout --
	scheduled-stop-748542
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-748542 -n scheduled-stop-748542
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-748542 -n scheduled-stop-748542: exit status 7 (72.324821ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-748542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-748542
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-748542: (4.418657521s)
--- PASS: TestScheduledStopUnix (112.96s)
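The scheduled-stop behaviour exercised above can be driven by hand with the same flags. A minimal sketch (profile name sched-demo is illustrative); status exits 7 once the host is Stopped, which the test treats as acceptable:

    $ minikube start -p sched-demo --memory=3072 --driver=docker --container-runtime=crio
    $ minikube stop -p sched-demo --schedule 5m                     # arm a stop five minutes out
    $ minikube status --format='{{.TimeToStop}}' -p sched-demo
    $ minikube stop -p sched-demo --cancel-scheduled                # disarm the pending stop
    $ minikube stop -p sched-demo --schedule 15s                    # re-arm with a short delay
    $ minikube status -p sched-demo                                 # exit status 7 after the scheduled stop fires
    $ minikube delete -p sched-demo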

                                                
                                    
TestInsufficientStorage (14.27s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-299212 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-299212 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.772344193s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"40d022cd-ce39-4494-ba4c-cc31f1a343d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-299212] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ffaea488-e328-47cf-bd89-a8bc7d877ed0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21681"}}
	{"specversion":"1.0","id":"2471de33-457e-47a9-bce5-ac4a9dc522e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d7edbda-f8be-443b-846c-8371f8951416","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig"}}
	{"specversion":"1.0","id":"a8ece8f6-923e-404b-9947-0a9b81424e43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube"}}
	{"specversion":"1.0","id":"3a2e43b2-169a-49c8-a2f1-2fcae163e1e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d14cc642-e8db-422a-a2cc-2150442dd9bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cd88d577-8d89-4913-92b8-c0cf8b98a4e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"793b00f9-a020-452f-a69a-cc9d47a49e73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"791499fd-cab4-4e00-a3e7-53dba10d289e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1123005-8115-41d1-a742-76ce7a7b4a1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"cd9ca46a-31e7-4b55-8440-7233e34bf729","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-299212\" primary control-plane node in \"insufficient-storage-299212\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f06d375-2f78-4e2c-9419-0f6ae3eef121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"317cddc8-5acf-4e5e-90c9-4b1d86f9c47f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4f98307-af95-4eab-849d-3242cf3ef29e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-299212 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-299212 --output=json --layout=cluster: exit status 7 (304.623366ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-299212","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-299212","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 22:40:06.326116  133329 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-299212" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-299212 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-299212 --output=json --layout=cluster: exit status 7 (305.124253ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-299212","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-299212","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1008 22:40:06.632890  133397 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-299212" does not appear in /home/jenkins/minikube-integration/21681-2481/kubeconfig
	E1008 22:40:06.642831  133397 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/insufficient-storage-299212/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-299212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-299212
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-299212: (1.88507739s)
--- PASS: TestInsufficientStorage (14.27s)
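The MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 settings visible in the JSON events appear to simulate a nearly full /var for this test, so start exits with code 26 (RSRC_DOCKER_STORAGE) and status reports StatusCode 507. A sketch of checking the same condition from a script, relying only on the --output=json behaviour shown above:

    $ minikube start -p storage-demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    $ echo $?                                                         # 26 when Docker's /var is out of space; '--force' skips the check per the error message
    $ minikube status -p storage-demo --output=json --layout=cluster  # StatusName "InsufficientStorage" (507), kubeconfig reported as Error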

                                                
                                    
TestRunningBinaryUpgrade (65.23s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.499880318 start -p running-upgrade-450799 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.499880318 start -p running-upgrade-450799 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.23447831s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-450799 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1008 22:44:29.998198    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-450799 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.631075134s)
helpers_test.go:175: Cleaning up "running-upgrade-450799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-450799
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-450799: (1.993599465s)
--- PASS: TestRunningBinaryUpgrade (65.23s)

                                                
                                    
TestKubernetesUpgrade (114.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.79112406s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-445308
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-445308: (1.308049186s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-445308 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-445308 status --format={{.Host}}: exit status 7 (92.934916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.613443134s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-445308 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (118.011123ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-445308] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-445308
	    minikube start -p kubernetes-upgrade-445308 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4453082 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-445308 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-445308 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.903375493s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-445308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-445308
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-445308: (2.147001436s)
--- PASS: TestKubernetesUpgrade (114.14s)
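The sequence above upgrades a cluster in place (v1.28.0, stop, v1.34.1) and confirms that a downgrade attempt is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), with minikube printing its own recovery suggestions. A condensed sketch of the same steps (profile name upgrade-demo is illustrative):

    $ minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    $ minikube stop -p upgrade-demo
    $ minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
    $ kubectl --context upgrade-demo version --output=json
    $ minikube start -p upgrade-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio    # refused, exit 106
    $ minikube delete -p upgrade-demo                                 # delete and recreate if the older version is really required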

                                                
                                    
TestMissingContainerUpgrade (120.81s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.482356991 start -p missing-upgrade-336831 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.482356991 start -p missing-upgrade-336831 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.257784733s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-336831
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-336831
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-336831 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1008 22:41:38.627502    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-336831 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.193625736s)
helpers_test.go:175: Cleaning up "missing-upgrade-336831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-336831
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-336831: (1.987447859s)
--- PASS: TestMissingContainerUpgrade (120.81s)
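Worth noting from the run above: after the cluster's Docker container is stopped and removed out from under an older profile, a plain start with the newer binary recreates it. A minimal sketch, assuming an old minikube release saved under /tmp as in the test (the suffix and profile name are illustrative):

    $ /tmp/minikube-v1.32.0.<suffix> start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio
    $ docker stop missing-demo && docker rm missing-demo              # simulate the node container going missing
    $ out/minikube-linux-arm64 start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio    # recreates the node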

                                                
                                    
TestPause/serial/Start (91.25s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-326566 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-326566 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m31.24550421s)
--- PASS: TestPause/serial/Start (91.25s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-326566 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-326566 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.202886013s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.56s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (67.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2488450830 start -p stopped-upgrade-979931 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2488450830 start -p stopped-upgrade-979931 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.657445191s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2488450830 -p stopped-upgrade-979931 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2488450830 -p stopped-upgrade-979931 stop: (1.402572207s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-979931 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-979931 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.664588222s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.73s)
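This is the simplest of the binary-upgrade scenarios in the run: create the cluster with the old release, stop it with that same binary, then start it with the binary under test. Sketch (old binary path and profile name illustrative; the old release uses the legacy --vm-driver spelling, as logged):

    $ /tmp/minikube-v1.32.0.<suffix> start -p stopped-demo --memory=3072 --vm-driver=docker --container-runtime=crio
    $ /tmp/minikube-v1.32.0.<suffix> -p stopped-demo stop
    $ out/minikube-linux-arm64 start -p stopped-demo --memory=3072 --driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 logs -p stopped-demo                   # the MinikubeLogs sub-test below checks this works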

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-979931
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-979931: (1.27288691s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-073474 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-073474 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (91.893751ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-073474] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
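As the output shows, --no-kubernetes and --kubernetes-version are mutually exclusive (exit status 14, MK_USAGE), and the error suggests clearing any globally configured version first. Sketch of the rejected call and the fix minikube proposes (profile name illustrative):

    $ minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio    # exit 14
    $ minikube config unset kubernetes-version                        # clear a global default, per the suggestion
    $ minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio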

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (32.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-073474 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-073474 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.91802289s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-073474 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-073474 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-073474 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.388052186s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-073474 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-073474 status -o json: exit status 2 (305.52534ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-073474","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-073474
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-073474: (1.948425317s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.64s)

                                                
                                    
TestNoKubernetes/serial/Start (5.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-073474 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-073474 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.913580804s)
--- PASS: TestNoKubernetes/serial/Start (5.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-073474 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-073474 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.563219ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
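The check above confirms a no-Kubernetes profile has no running kubelet: systemctl is-active exits non-zero inside the node (the ssh process reports status 3), so the test expects minikube ssh to return exit status 1. A quick sketch of the same verification (profile name illustrative):

    $ minikube -p nok8s-demo status -o json                           # Kubelet and APIServer reported as Stopped
    $ minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"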

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.02s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-073474
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-073474: (1.247958582s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-073474 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-073474 --driver=docker  --container-runtime=crio: (6.761582461s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-073474 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-073474 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.548413ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestNetworkPlugins/group/false (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-840929 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-840929 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (184.923178ms)

                                                
                                                
-- stdout --
	* [false-840929] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21681
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 22:45:48.856513  166726 out.go:360] Setting OutFile to fd 1 ...
	I1008 22:45:48.856690  166726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:45:48.856700  166726 out.go:374] Setting ErrFile to fd 2...
	I1008 22:45:48.856706  166726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1008 22:45:48.856969  166726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21681-2481/.minikube/bin
	I1008 22:45:48.857394  166726 out.go:368] Setting JSON to false
	I1008 22:45:48.858324  166726 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5299,"bootTime":1759958250,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1008 22:45:48.858393  166726 start.go:141] virtualization:  
	I1008 22:45:48.861945  166726 out.go:179] * [false-840929] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1008 22:45:48.865826  166726 out.go:179]   - MINIKUBE_LOCATION=21681
	I1008 22:45:48.865925  166726 notify.go:220] Checking for updates...
	I1008 22:45:48.872334  166726 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 22:45:48.875245  166726 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21681-2481/kubeconfig
	I1008 22:45:48.878062  166726 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21681-2481/.minikube
	I1008 22:45:48.880806  166726 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1008 22:45:48.883646  166726 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 22:45:48.887206  166726 config.go:182] Loaded profile config "force-systemd-env-092546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1008 22:45:48.887340  166726 driver.go:421] Setting default libvirt URI to qemu:///system
	I1008 22:45:48.915583  166726 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1008 22:45:48.915711  166726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1008 22:45:48.972538  166726 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-08 22:45:48.963526079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1008 22:45:48.972656  166726 docker.go:318] overlay module found
	I1008 22:45:48.975756  166726 out.go:179] * Using the docker driver based on user configuration
	I1008 22:45:48.978638  166726 start.go:305] selected driver: docker
	I1008 22:45:48.978658  166726 start.go:925] validating driver "docker" against <nil>
	I1008 22:45:48.978672  166726 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 22:45:48.982079  166726 out.go:203] 
	W1008 22:45:48.984933  166726 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1008 22:45:48.987767  166726 out.go:203] 

                                                
                                                
** /stderr **
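The exit above is the expected guard for this group: with --container-runtime=crio, minikube rejects --cni=false (MK_USAGE, exit 14) because CRI-O requires a CNI plugin. A working start would simply leave --cni at its default; the explicit value below is a hypothetical illustration, assuming the bridge plugin name is accepted by this minikube build:

    $ minikube start -p false-demo --memory=3072 --driver=docker --container-runtime=crio               # CNI chosen automatically
    $ minikube start -p false-demo --memory=3072 --cni=bridge --driver=docker --container-runtime=crio  # hypothetical explicit CNI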
net_test.go:88: 
----------------------- debugLogs start: false-840929 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-840929

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-840929" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-840929

>>> host: docker daemon status:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: docker daemon config:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: /etc/docker/daemon.json:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: docker system info:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: cri-docker daemon status:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: cri-docker daemon config:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: cri-dockerd version:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: containerd daemon status:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: containerd daemon config:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: /etc/containerd/config.toml:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: containerd config dump:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: crio daemon status:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: crio daemon config:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: /etc/crio:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

>>> host: crio config:
* Profile "false-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-840929"

----------------------- debugLogs end: false-840929 [took: 3.230321413s] --------------------------------
helpers_test.go:175: Cleaning up "false-840929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-840929
--- PASS: TestNetworkPlugins/group/false (3.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (59.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (59.491676283s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-110407 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b76316e1-0819-46ee-90c6-eb3ec4a3f531] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b76316e1-0819-46ee-90c6-eb3ec4a3f531] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003910065s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-110407 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-110407 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-110407 --alsologtostderr -v=3: (11.891348886s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110407 -n old-k8s-version-110407
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110407 -n old-k8s-version-110407: exit status 7 (72.948172ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-110407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (45.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-110407 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (45.102528075s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110407 -n old-k8s-version-110407
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wfmhw" [8bb246ef-da35-4f69-8109-efcbd38968b7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004328955s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wfmhw" [8bb246ef-da35-4f69-8109-efcbd38968b7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003225419s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-110407 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-110407 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (61.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1008 22:56:38.627178    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m1.554367403s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-939665 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [64834f84-6d88-49e8-81ae-196f4a2bd678] Pending
helpers_test.go:352: "busybox" [64834f84-6d88-49e8-81ae-196f4a2bd678] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [64834f84-6d88-49e8-81ae-196f4a2bd678] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003206293s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-939665 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-939665 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-939665 --alsologtostderr -v=3: (11.840254173s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.84s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-939665 -n no-preload-939665
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-939665 -n no-preload-939665: exit status 7 (69.181288ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-939665 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (48.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-939665 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.099026559s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-939665 -n no-preload-939665
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (48.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f6ktf" [ed4722e2-72aa-4561-81bb-11312618fca8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003772634s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f6ktf" [ed4722e2-72aa-4561-81bb-11312618fca8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003209535s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-939665 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-939665 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.826137235s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1008 22:59:18.460201    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:18.466621    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:18.477985    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:18.499457    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:18.540828    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:18.622210    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:18.783902    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:19.105666    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:19.747832    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:21.029299    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:23.590798    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:28.712833    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:29.998788    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 22:59:38.954567    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.565592402s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-825429 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [030c16a7-3c27-4d5e-868d-923d85baa808] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [030c16a7-3c27-4d5e-868d-923d85baa808] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003971298s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-825429 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-779490 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8de3ed7b-63b8-4b8f-bc7c-4a46b11e83f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8de3ed7b-63b8-4b8f-bc7c-4a46b11e83f6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.045681413s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-779490 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.67s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-825429 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-825429 --alsologtostderr -v=3: (12.015193811s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-779490 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-779490 --alsologtostderr -v=3: (12.04647616s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825429 -n embed-certs-825429
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825429 -n embed-certs-825429: exit status 7 (70.163597ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-825429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (58.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-825429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.924430567s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-825429 -n embed-certs-825429
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490: exit status 7 (124.751206ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-779490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1008 23:00:40.400971    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-779490 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.578447295s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-779490 -n default-k8s-diff-port-779490
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ppnz2" [4ce6d110-8ead-4b00-9c1c-115488a858ef] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004935516s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-449f2" [acd8ad36-747a-4ec7-a0d7-a8bf186ffd52] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003423706s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ppnz2" [4ce6d110-8ead-4b00-9c1c-115488a858ef] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003882794s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-779490 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-449f2" [acd8ad36-747a-4ec7-a0d7-a8bf186ffd52] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004458828s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-825429 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-779490 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-825429 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-598445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-598445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.94571137s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.95s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1008 23:01:38.629840    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:51.906733    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:51.913056    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:51.924466    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:51.949901    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:51.994411    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:52.076249    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:52.238283    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:52.559994    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:53.201755    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:54.483951    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:01:57.045451    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:02:02.167344    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:02:02.322795    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:02:12.409370    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m26.000209608s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-598445 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-598445 --alsologtostderr -v=3: (1.247125627s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-598445 -n newest-cni-598445
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-598445 -n newest-cni-598445: exit status 7 (66.45063ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-598445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-598445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1008 23:02:32.891048    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-598445 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.28744607s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-598445 -n newest-cni-598445
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.82s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-598445 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (85.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.60021065s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.60s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-840929 "pgrep -a kubelet"
I1008 23:03:03.104258    4286 config.go:182] Loaded profile config "auto-840929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-840929 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sbpgt" [211e928e-0d26-4b8e-8d8b-5d67f90f3776] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sbpgt" [211e928e-0d26-4b8e-8d8b-5d67f90f3776] Running
E1008 23:03:13.852641    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004504891s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-840929 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (55.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.46521983s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-l4d2d" [6c6cf5b3-f9ef-44e5-a64b-773675fedcf7] Running
E1008 23:04:18.460372    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004184568s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-840929 "pgrep -a kubelet"
I1008 23:04:24.480980    4286 config.go:182] Loaded profile config "kindnet-840929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-840929 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-87vm7" [267027ff-524f-4a86-bca6-ed44c1c1a4b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1008 23:04:29.998379    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-87vm7" [267027ff-524f-4a86-bca6-ed44c1c1a4b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003306726s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-qfxpz" [adfed3eb-2911-4374-969f-aabcef70a2cc] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1008 23:04:35.773985    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003893895s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-840929 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-840929 "pgrep -a kubelet"
I1008 23:04:41.278687    4286 config.go:182] Loaded profile config "calico-840929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-840929 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g7dmq" [1ee14851-7707-45a6-8d72-5722d5c8409b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1008 23:04:46.164032    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/old-k8s-version-110407/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-g7dmq" [1ee14851-7707-45a6-8d72-5722d5c8409b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006427404s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

TestNetworkPlugins/group/calico/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-840929 exec deployment/netcat -- nslookup kubernetes.default
E1008 23:04:52.676069    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:04:52.682402    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:04:52.693733    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:04:52.715075    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:04:52.756514    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1008 23:04:52.839305    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1008 23:04:53.001448    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (71.52s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1008 23:05:02.949075    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:05:13.190884    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.517396733s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.52s)

TestNetworkPlugins/group/enable-default-cni/Start (79.61s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1008 23:05:33.672880    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:05:53.074541    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/addons-961288/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m19.606078814s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.61s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-840929 "pgrep -a kubelet"
I1008 23:06:12.178849    4286 config.go:182] Loaded profile config "custom-flannel-840929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-840929 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pk5z7" [c77c5933-5590-4769-8b80-f682cb9590e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1008 23:06:14.635691    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-pk5z7" [c77c5933-5590-4769-8b80-f682cb9590e6] Running
E1008 23:06:21.692731    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003411709s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-840929 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-840929 "pgrep -a kubelet"
I1008 23:06:38.564943    4286 config.go:182] Loaded profile config "enable-default-cni-840929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.42s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-840929 replace --force -f testdata/netcat-deployment.yaml
E1008 23:06:38.627483    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/functional-101115/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dnmwz" [78a353f9-6f3c-4e77-a91b-54a08b38b934] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dnmwz" [78a353f9-6f3c-4e77-a91b-54a08b38b934] Running
E1008 23:06:51.907503    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003772476s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.42s)

TestNetworkPlugins/group/flannel/Start (65.39s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.391135655s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.39s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-840929 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.39s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (79.19s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1008 23:07:19.615576    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/no-preload-939665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:07:36.557092    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/default-k8s-diff-port-779490/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-840929 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m19.191858853s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.19s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-6dkvd" [bcf3040f-b508-40dc-a1b3-3880fa51635d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003004862s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
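
(For reference, not captured test output: the ControllerPod check above simply waits for the pods labelled app=flannel in the kube-flannel namespace to report healthy. A rough manual equivalent, assuming the flannel-840929 profile is still up, would be:)
  kubectl --context flannel-840929 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=600s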

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-840929 "pgrep -a kubelet"
I1008 23:07:57.614108    4286 config.go:182] Loaded profile config "flannel-840929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-840929 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m56sv" [7e494624-e9aa-4ceb-a7c5-1e0c14044098] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m56sv" [7e494624-e9aa-4ceb-a7c5-1e0c14044098] Running
E1008 23:08:03.394405    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:08:03.401009    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:08:03.412469    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:08:03.434008    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:08:03.475447    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:08:03.556962    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:08:03.718484    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:08:04.040171    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:08:04.681976    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1008 23:08:05.964081    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004179743s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-840929 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-840929 "pgrep -a kubelet"
I1008 23:08:36.348248    4286 config.go:182] Loaded profile config "bridge-840929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-840929 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m8d9m" [09173018-ecc2-4c33-9922-0e311d2820ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m8d9m" [09173018-ecc2-4c33-9922-0e311d2820ff] Running
E1008 23:08:44.371870    4286 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21681-2481/.minikube/profiles/auto-840929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003852144s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
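
(For reference, not captured test output: each NetCatPod check replaces the netcat test deployment and waits for the app=netcat pod to become healthy. A rough manual equivalent for this profile, assuming testdata/netcat-deployment.yaml from the minikube test tree, would be:)
  kubectl --context bridge-840929 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context bridge-840929 rollout status deployment/netcat --timeout=15m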

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-840929 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-840929 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-889641 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-889641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-889641
--- SKIP: TestDownloadOnlyKic (0.45s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.21s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-036919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-036919
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

TestNetworkPlugins/group/kubenet (3.39s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-840929 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-840929

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-840929

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-840929

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-840929

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-840929

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-840929

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-840929

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-840929

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-840929

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-840929

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: /etc/hosts:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: /etc/resolv.conf:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-840929

>>> host: crictl pods:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: crictl containers:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> k8s: describe netcat deployment:
error: context "kubenet-840929" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-840929" does not exist

>>> k8s: netcat logs:
error: context "kubenet-840929" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-840929" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-840929" does not exist

>>> k8s: coredns logs:
error: context "kubenet-840929" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-840929" does not exist

>>> k8s: api server logs:
error: context "kubenet-840929" does not exist

>>> host: /etc/cni:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: ip a s:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: ip r s:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: iptables-save:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: iptables table nat:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-840929" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-840929" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-840929" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: kubelet daemon config:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> k8s: kubelet logs:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-840929

>>> host: docker daemon status:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

>>> host: docker daemon config:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-840929"

                                                
                                                
----------------------- debugLogs end: kubenet-840929 [took: 3.23292867s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-840929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-840929
--- SKIP: TestNetworkPlugins/group/kubenet (3.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-840929 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-840929" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-840929

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-840929" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-840929"

                                                
                                                
----------------------- debugLogs end: cilium-840929 [took: 3.634361497s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-840929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-840929
--- SKIP: TestNetworkPlugins/group/cilium (3.79s)

                                                
                                    